Re: How do I determine if versions of phpMyAdmin before 4.8.5 are SQL injectable using sqlmap?

2019-04-17 Thread shawn l.green

Hello,

On 4/17/2019 10:29 AM, Turritopsis Dohrnii Teo En Ming wrote:

Subject/Topic: How do I determine if versions of phpMyAdmin before 4.8.5 are SQL 
injectable using sqlmap?

Good evening from Singapore,

Our customer (company name is Confidential/not disclosed) reported that their 
MySQL database has been found missing or was deleted a few times.


While it is bad form to explain how to break into anyone's software 
(including our own), there are places you can look to get a better idea 
about what might have happened:


1 - the database may have been removed by a DROP DATABASE command.

General Query Log - this will show you which session issued the command 
and the command itself.
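For example, the general log could be enabled and searched like this (a 
hedged sketch; the general log adds measurable overhead on a busy server, 
so enable it only while investigating):

```sql
-- Log to a table so the history can be searched with SELECT:
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';

-- Later, look for the offending session and statement:
SELECT event_time, user_host, thread_id, argument
FROM mysql.general_log
WHERE argument LIKE 'DROP DATABASE%';
```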


Audit log (only for commercial releases) - same thing

Binary Log - Should have a record of the command executing. But, 
depending on which account was used or if Binary Log filtering is in 
place, it may not. This presumes that the Binary Log is even enabled on 
this system.  Many people mistakenly believe it is only for Replication 
when its other primary use is for point-in-time recovery. If your 
customer has a recent backup and all of the Binary Log files created 
since that backup, they could return the system to the point it was at 
just before that database went missing, skip that DROP command, then 
continue rolling forward the changes to the other tables to return to a 
"current" state of their data.
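A sketch of locating the offending event from within the server (the file 
name below is an example; your binary log files will have their own base 
name and numbering). The position noted here is what a point-in-time 
restore with mysqlbinlog would stop just before:

```sql
-- List the available binary log files:
SHOW BINARY LOGS;

-- Scan one file for the offending statement and note its start position:
SHOW BINLOG EVENTS IN 'binlog.000042';
```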


2 - The database was "dropped" by either changing privileges to the 
folder or by removing it from disk or some other file-level or 
system-level operation. Either of those would cause errors to start 
appearing in the MySQL Error Log because a resource that mysqld thinks 
should exist is no longer available.   While the Error Log can't tell 
you which operation made those files "no longer available", it will carry 
the fingerprint of such an action having happened outside of mysqld.



Have you determined which method was used to make that database/schema 
disappear?


A normal DROP command (which could happen through an SQL injection 
attack) would not leave messages in the Error Log about "unable to 
access ..." or something similar. The server (mysqld) would know that 
the database was gone (because it removed it) and it wouldn't keep 
trying to find it, or the tables within it, on behalf of your clients.





... snip ...
No matter how many commands I try, sqlmap always reports that phpMyAdmin 4.8.4 
is *NOT* SQL injectable. Perhaps I was using the wrong sqlmap commands all the 
time? The following is one of the many sqlmap commands I have used.

$ python sqlmap.py -u "https://www.EXAMPLE.com/phpmyadmin/index.php?id=1" --level=1 
--dbms=mysql --sql-query="drop database"



Privately asking the phpMyAdmin team may be a better way to learn how 
their system could be made to do things it was not intended to do. This 
list is not about phpMyAdmin, and it is very public.  They may also have 
a way of showing you some kind of trace or log that serves as a 
fingerprint of that happening.



--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql



Re: Replication and user privileges

2019-02-26 Thread shawn l.green

Hello Jim,

On 2/25/2019 7:29 PM, Jim wrote:

On 2/25/2019 5:46 PM, shawn l.green wrote:

Hello Jim,

On 2/25/2019 5:04 PM, Jim wrote:

I have a question about mysql replication. I believe I understand most
of it, but have a question about user privileges.

I understand on the master, the replication user must have the
Repl_slave_priv privilege as described here:
https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_replication-slave



My question is about what replication-related users and privileges must
exist on the slave.

So, for example, if an insert on the master that is to be replicated is
performed by user 'abc' with proper insert permissions on the master,
does that same 'abc' user with same insert permissions need to exist on
the slave as well?

In other words, what user is performing the replication operation on the
slave? I don't see any indication of users referenced in the bin logs
that I have examined on the master. Are user and privileges regarding
replicated queries irrelevant on the slave and that is handled all
internally via the replication thread with no regard to user privileges?

Thank you.
Jim



Your final supposition is correct. All privileges were checked and
verified on the master when the original command was executed. The
Replication system on the slave is going to repeat that change as well
as possible given the state of its copy of the data without regards to
"who originally performed this change" on the upstream master.

We do not store credentials in the Binary Log because they are not
important to either of the purposes of the Binary Log

* point-in-time recovery
or
* Replication (which is very much like an automated, continuous
point-in-time recovery)

===

That replication account you mentioned, on the master, is required to
give a slave (and you could have several) enough rights to read the
Binary Log and not much else. This allows you to create an account
that can login from a remote location with the "least privileges"
necessary to do its job. This minimizes your data's exposure should
that account become compromised.

Many other accounts could also have the REPL_SLAVE_PRIV privilege and
any of those could be used by a slave to do the same job. However
losing control over one of those more privileged accounts could pose a
higher risk to your data.




Thanks, Shawn. Your response confirms what I had assumed was happening.

So bottom line... what I plan to do is strip the various
insert/update/delete privileges from appropriate db users on my slaves.
I had placed them there originally because I thought they would be
needed for the replicated queries, but not true based on your response.

I only want the various mysql users used by my code to have select privs
on the slaves so that if somehow a slave was mistakenly written to via a
bug in my code, that write would fail and I would receive the error. The
slaves should only be used for selects and should never experience a write.

That would make sense based on our discussion, correct?

Thanks again.
Jim



As masters and slaves can exchange "positions" or "roles" (depending on 
how you like to mentally visualize the relationship) within a 
replication graph in a failover situation, the time needed to 
re-establish the old privileges with GRANT commands may not be time you 
want to spend.


A cleaner, simpler solution is to set the --super-read-only flag in the 
server:

https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_super_read_only

That way, you get the behavior you want (no writes to a read-only slave) 
without forcing differences to the content of your privileges tables 
within different nodes of your Replication setup.  Each node will remain 
a transactionally consistent copy of all the others (within the temporal 
limits of replication being an asynchronous process).
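A minimal sketch of that setting on the slave (note that enabling 
super_read_only also implicitly enables read_only, so writes are blocked 
even for accounts holding SUPER; the replication SQL thread is exempt):

```sql
-- On the slave:
SET GLOBAL super_read_only = ON;

-- Verify both flags:
SHOW GLOBAL VARIABLES LIKE '%read_only%';
```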


Yours,

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Replication and user privileges

2019-02-25 Thread shawn l.green

Hello Jim,

On 2/25/2019 5:04 PM, Jim wrote:

I have a question about mysql replication. I believe I understand most
of it, but have a question about user privileges.

I understand on the master, the replication user must have the
Repl_slave_priv privilege as described here:
https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_replication-slave


My question is about what replication-related users and privileges must
exist on the slave.

So, for example, if an insert on the master that is to be replicated is
performed by user 'abc' with proper insert permissions on the master,
does that same 'abc' user with same insert permissions need to exist on
the slave as well?

In other words, what user is performing the replication operation on the
slave? I don't see any indication of users referenced in the bin logs
that I have examined on the master. Are user and privileges regarding
replicated queries irrelevant on the slave and that is handled all
internally via the replication thread with no regard to user privileges?

Thank you.
Jim



Your final supposition is correct. All privileges were checked and 
verified on the master when the original command was executed. The 
Replication system on the slave is going to repeat that change as well 
as possible given the state of its copy of the data without regards to 
"who originally performed this change" on the upstream master.


We do not store credentials in the Binary Log because they are not 
important to either of the purposes of the Binary Log


* point-in-time recovery
or
* Replication (which is very much like an automated, continuous 
point-in-time recovery)


===

That replication account you mentioned, on the master, is required to 
give a slave (and you could have several) enough rights to read the 
Binary Log and not much else. This allows you to create an account that 
can login from a remote location with the "least privileges" necessary 
to do its job. This minimizes your data's exposure should that account 
become compromised.


Many other accounts could also have the REPL_SLAVE_PRIV privilege and 
any of those could be used by a slave to do the same job. However losing 
control over one of those more privileged accounts could pose a higher 
risk to your data.
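A least-privilege account of that kind could be sketched like this (the 
account name and host pattern below are placeholders, not anything from 
this thread; REPLICATION SLAVE can only be granted globally, i.e. ON *.*):

```sql
CREATE USER 'repl'@'10.0.0.%' IDENTIFIED BY '...';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.0.%';
```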



--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Connections from mysql8.0 to mysql5.1 - bad handshake

2018-11-01 Thread shawn l.green

Hi Jim,

On 10/31/2018 7:12 PM, Halaasz Saandor wrote:

2018/10/31 15:15 ... Jim:

Given the following bug report, what I am trying to do does not sound
hopeful:
https://bugs.mysql.com/bug.php?id=90994


...


Any thoughts or do I need to accept that what I'm attempting just
isn't going to work?


 From the same bug report, id=90994:

[24 Oct 20:17] Brad Jackson

Version 8.0.13 was released on 10/22 and the change list says:

"Support for MySQL 5.5 by MySQL Workbench 8.0 was removed. If you still
need to use MySQL Workbench on a MySQL 5.5 server, you can use MySQL
Workbench 6.3"

I guess we're all stuck on the old version until we upgrade our servers.



Halaasz is on the right track.

As a client (which is what a replication slave really is), MySQL 8.0 
doesn't know how to log in to a 5.1 server. Those old handshakes have 
been retired.


We only maintain connection compatibility with the "previous version" 
which, relative to 8.0, is 5.7. Older versions may work, but it isn't 
guaranteed.


You could, potentially, set up a replication chain like this:
5.1 -> (is a master of) 5.5 -> 5.6 -> 5.7 -> 8.0

But then you would need to worry if your table definitions (we have 
deprecated and removed some data types since 5.1 was the "current 
version") are even legal in 8.0.  Other language features, like command 
syntaxes, also evolve over time.


There's a lot of deprecation history you will need to worry about. 
Something that is legal to replicate from 5.1 -> 5.5 may no longer be 
legal between 5.5 and 5.6 because 5.5 may have been the last version for 
which that feature was supported.  You may be better off with a 
lift-and-shift upgrade to try to establish a copy of your non-system 
tables and other objects (like stored procedures or views) in an empty 
initialized 8.0 server.  Then you can use a set of 8.0 CREATE USER and 
GRANT commands (you can't use naked GRANT commands to create accounts 
any longer. That feature was deprecated and removed in earlier versions) 
to populate your 8.0 server with user accounts.
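A sketch of the 8.0-style account creation (account and schema names here 
are illustrative only). The account must be created first; a naked GRANT 
no longer implicitly creates it:

```sql
CREATE USER 'appuser'@'localhost' IDENTIFIED BY '...';
GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.* TO 'appuser'@'localhost';
```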


Once you reach that stage, you are ready to start testing copies of your 
applications against 8.0 to see what else will need to be updated (such 
as the library you use to connect to MySQL).  Moving from 5.1 to 8.0 is 
a big shift; you potentially have a lot of work ahead of you.



Yours,

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: High cpu usage

2018-10-26 Thread shawn l.green

Hello Machiel,

(I am guessing you can only process top-posts?)

When you stop only the IO thread, you may leave the last event recorded 
to the Relay Log incomplete.


When it gets to that part of the Relay Log, the SQL thread may only be 
part-way through a transaction. It will keep that transaction alive 
waiting for the IO thread to finish downloading the rest of the event 
from the master's copy of the binary log. That partially-complete 
transaction is most likely blocking the ability of your other commands 
to operate more efficiently for several reasons:

 * MVCC
 * InnoDB history length
 * Incomplete changes to secondary indexes, forcing those commands 
to scan the table instead of using the index (related to MVCC)



We made the SQL thread wait so that intermittent networks (a real thing 
years ago) would not "break" replication. We would wait for connectivity 
to resume so that replication could continue.


A safer plan is to stop both threads at the same time. Just use the 
basic STOP SLAVE command instead of the more specific STOP SLAVE IO_THREAD.
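In other words, on the slave:

```sql
STOP SLAVE;            -- stops the IO and SQL threads together
-- rather than:
-- STOP SLAVE IO_THREAD;  -- can strand the SQL thread mid-transaction

-- and later, to resume both threads:
START SLAVE;
```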


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.




On 10/26/2018 2:09 AM, Machiel Richards wrote:

Hi Shawn


Thank you for the response.


 In order to pause the slave, they stop the sql_io_thread, and to unpause 
they simply start the thread.


  I have run "show engine innodb status", yes, and the threads show 90% as 
sleeping plus a few selects, all from the same table, as it does a lot of 
authentications for dial-outs.


 I will have a look at the results from SELECT * FROM 
information_schema.INNODB_TRX; during the day as we get this issue regularly 
and will provide feedback.



Regards





(earlier thread below... )



________
From: shawn l.green 
Sent: Thursday, 25 October 2018 9:54:10 PM
To: mysql@lists.mysql.com
Subject: Re: High cpu usage

Hello Machiel,

On 10/25/2018 6:09 AM, Machiel Richards wrote:

Good day all


 Hoping this mail finds you well.


 I am hoping someone can perhaps give us some guidance here as we now seem 
to be stuck on a problem and have not been able to find a solution after more 
than a month.


 We are running an opensips server on Centos 6.5 , using mysql 5.7.13 which 
was installed via Tarball file.


 The server is setup as a slave and master and receives updates from 
opensips config nodes as well as registrations from workers.


 Replication is paused during the day and forward replication (master) is 
disabled at the moment.


 However , we are getting an issue every day on mysql side in terms of 
mysql pushing up server load.



  During the day the server is running fine with a load avg not going above 
1.5 during peak times.


  However in the evening , replication is unpaused, and completes 
processing and catchup within about 15 minutes and is paused again about 30 
minutes after the unpause.



Give or take 45 minutes to an hour after the replication is paused 
again, mysql starts to cause high cpu usage with no apparent processes running 
as can be seen on full processlist (maybe one or two selects which completes 
fairly quickly)


 The higher load, causes queries to slow down however and opensips to 
start timing out on db connections, causing clients to resubmit.


   The resubmits , then obviously causes even more load spiking the mysql 
load to increase as well as the server load and eventually opensips kills 
itself.



  I have looked at the disks, iowaits, memory usage, all is fine.


  We do not see any strange queries or stuck queries, no deadlocks, etc... 
only the increase in selects after mysql starts to push up the cpu load.



  We have added all indexes we can find, but even this has made no 
difference at all.


   Currently we are at a loss so I am hoping someone else can assist in 
explaining how else we can find out why mysql is eating up the cpu ...



 The same behaviour can also be seen the moment any new feature is added to 
the server that requires mysql processing to be done, so this does not seem to 
be specifically related to replication, however it does seem like the current 
load from replication causes mysql to act up.


the server is currently running on SSD (recently replaced) , and 8Gb of 
memory with 1 x quadcore CPU.



  should any more info be required, please feel free to ask.




When you say pause replication, what command are you executing on the
slave?

Which end of the system is experiencing the high CPU usage: the master
or the slave?

Have you checked these resources to see what the InnoDB main or
background threads are doing when your CPU starts to spike? (you could
be in a massive rollback)

SHOW ENGINE INNODB STATUS
SELECT * FROM information_schema.INNODB_TRX;

Re: High cpu usage

2018-10-25 Thread shawn l.green

Hello Machiel,

On 10/25/2018 6:09 AM, Machiel Richards wrote:

Good day all


Hoping this mail finds you well.


I am hoping someone can perhaps give us some guidance here as we now seem 
to be stuck on a problem and have not been able to find a solution after more 
than a month.


We are running an opensips server on Centos 6.5 , using mysql 5.7.13 which 
was installed via Tarball file.


The server is setup as a slave and master and receives updates from 
opensips config nodes as well as registrations from workers.


Replication is paused during the day and forward replication (master) is 
disabled at the moment.


However , we are getting an issue every day on mysql side in terms of mysql 
pushing up server load.



 During the day the server is running fine with a load avg not going above 
1.5 during peak times.


 However in the evening , replication is unpaused, and completes processing 
and catchup within about 15 minutes and is paused again about 30 minutes after 
the unpause.



   Give or take 45 minutes to an hour after the replication is paused 
again, mysql starts to cause high cpu usage with no apparent processes running 
as can be seen on full processlist (maybe one or two selects which completes 
fairly quickly)


The higher load, causes queries to slow down however and opensips to 
start timing out on db connections, causing clients to resubmit.


  The resubmits , then obviously causes even more load spiking the mysql 
load to increase as well as the server load and eventually opensips kills 
itself.



 I have looked at the disks, iowaits, memory usage, all is fine.


 We do not see any strange queries or stuck queries, no deadlocks, etc... 
only the increase in selects after mysql starts to push up the cpu load.



 We have added all indexes we can find, but even this has made no 
difference at all.


  Currently we are at a loss so I am hoping someone else can assist in 
explaining how else we can find out why mysql is eating up the cpu ...



The same behaviour can also be seen the moment any new feature is added to 
the server that requires mysql processing to be done, so this does not seem to 
be specifically related to replication, however it does seem like the current 
load from replication causes mysql to act up.


   the server is currently running on SSD (recently replaced) , and 8Gb of 
memory with 1 x quadcore CPU.



 should any more info be required, please feel free to ask.




When you say pause replication, what command are you executing on the 
slave?


Which end of the system is experiencing the high CPU usage: the master 
or the slave?


Have you checked these resources to see what the InnoDB main or 
background threads are doing when your CPU starts to spike? (you could 
be in a massive rollback)


SHOW ENGINE INNODB STATUS
SELECT * FROM information_schema.INNODB_TRX;
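For example, the second query can be narrowed to surface the transactions 
most likely to explain a CPU spike (a sketch; all columns shown are 
standard in information_schema.INNODB_TRX):

```sql
-- Oldest transactions first; look for 'ROLLING BACK' or very old
-- 'RUNNING' entries with large trx_rows_modified counts:
SELECT trx_id, trx_state, trx_started, trx_rows_modified
FROM information_schema.INNODB_TRX
ORDER BY trx_started;
```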

Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc.

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.






Re: what is the rationale for not allowing LOCK TABLES in a stored procedure

2018-08-18 Thread shawn l.green

Hello Mogens,

On 8/18/2018 2:32 PM, Mogens Melander wrote:

Guys,

I think I remember this from way back.

You could ask for a lock, and get an OK if it is safe.

Something like, if there is pending transactions, on your target tables,
you would get a NO.

But then again. I could be wrong, and Shawn is the authority on this.



Your request for a lock would have waited until all existing readers or 
writers (depending on the type of lock you asked for) had finished using 
the tables you wanted to lock. By extension, that means that any 
transactions active against the tables you wanted to lock would have 
also needed to have committed or rolled back before your request would 
have been granted. Any new actions against the table would have been 
queued up behind your LOCK request. This has confused more than one DBA 
as they didn't realize that the LOCK was going to be such a tight 
bottleneck.


These kinds of whole table locks live above the blocking/locking 
coordination of the individual storage engines or the transaction 
control code.  They are managed in the "server layer" of our code.


This separation of scope is one reason why blending transactional and 
non-transactional tables in the same data management process is 
generally frowned on. Either be all-transactional (InnoDB) or not. The 
behavior will be easier to predict allowing your developers to use 
either the transaction control commands (BEGIN/COMMIT/ROLLBACK/... ) or 
the LOCK commands with confidence.
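The two styles side by side (table and column names here are invented 
for illustration):

```sql
-- All-transactional style (InnoDB):
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;

-- Table-lock style; note that LOCK TABLES implicitly commits
-- any transaction that is still open when it runs:
LOCK TABLES accounts WRITE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UNLOCK TABLES;
```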



Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.







=== original thread ===


On 2018-08-18 23:59, shawn l.green wrote:

Hello Jeff,

On 8/13/2018 12:05 PM, j...@lxvi.net wrote:

Hello, I have read through several pages of the reference manual, and
I've seen several instances where it is stated that LOCK TABLES (and
UNLOCK TABLES) is not allowed in a stored procedure, but so far, I
haven't found an explanation as to *why* that is. Could someone please
enlighten me?

Thanks


Normally, the list is more responsive than this. This is a pretty easy
question and someone usually handles those before I need to step in as
a backstop.

The key why you cannot execute a LOCK TABLE command within a stored
program is here:
https://dev.mysql.com/doc/refman/5.7/en/lock-tables-and-transactions.html
###
LOCK TABLES is not transaction-safe and implicitly commits any active
transaction before attempting to lock the tables.
###

Stored programs execute under the scope of the transaction in which
they are started. That determines which sets of rows are "visible" to
the routine and sets boundaries on what may be committed or rolled
back should the need arise.

(a simple example)
* your session: START TRANSACTION
* your session: ...other data activity ...
* your session (INSERT ... )
  * causes an INSERT trigger to fire
* which calls a stored procedure

If that stored procedure or that trigger called a LOCK TABLE command,
it would forcibly COMMIT the existing transaction you had been working
within until that moment.  Your half-completed work would have become
fully committed even if a later step had needed you to issue a
ROLLBACK command.

Note that even if you are not in a multi-statement transaction, any
stored programs called by or executed within the scope of your user
command are part of that little mini (auto-committed) transaction.

Does that help?

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN









Re: how to select the record with one sql statement?

2018-08-18 Thread shawn l.green

Hello sea,

On 8/13/2018 7:01 PM, sea wrote:

helle,
I have a table, like this:
  pigId   day        weight
  pig1    2018-1-1   21
  pig2    2018-1-1   31
  pig3    2018-1-1   41
  pig1    2018-1-2   22
  pig2    2018-1-2   31
  pig3    2018-1-2   40
  pig1    2018-1-3   23
  pig2    2018-1-3   30
  pig3    2018-1-3   41
  ...

Only pig1's weight increases continuously for 3 days. Given an input 
num_of_day (the number of days the weight must increase continuously), 
the expected output is (certain_day, pigId) such that, starting from 
certain_day, that pigId's weight increases continuously for num_of_day 
days. How do I select those records in one SQL statement?
 thanks



I've thought about this a bit (since your question appeared on the list) 
and I break down the tasks you need to perform in my head like this. 
(Others on the list may have different ways to approach the same problem)


task 1 - For each bucket, a pigId value, assemble an ordered list (not a 
set) of each weight sorted by time.  (not hard)


task 2 - Within each ordered list, compare the values of every 
consecutive pair.  (several ways to do this)


task 3 - Iterate over those "consecutive value differences" generated in 
task 2 looking for the longest sequence of positive non-zero values for 
each pigId.  (this is not really a set-oriented process so normal SELECT 
or GROUP BY command patterns will not handle it with any efficiency)


I'm afraid that attempting all of that sequencing and iteration using 
just a single set-based SQL command is not going to be practical. Using 
one or more cursors within a stored procedure is your best bet for this 
type of sequential trend analysis.


I could easily imagine the first step as an INSERT...SELECT...ORDER BY... 
command going to a new table with an autoincrement column on it (to 
provide a global sequence number across all of your individual pigId 
values). The second step could do a self join to that table where the 
ON clause could look like

  a.pigId = b.pigID AND a.seq-1 = b.seq

But at that point, counting the length of sequences (and remembering 
when each trend became positive) needs a loop. That's where even 
complicated set-wise SQL fails you and you need to shift into using the 
SQL of stored programs.
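Those first two steps could be sketched like this (table and column names 
are assumed from the sample data; the ORDER BY in the INSERT...SELECT is 
what gives the autoincrement column its per-pig, per-day ordering):

```sql
-- Step 1: materialize a globally sequenced copy of the data:
CREATE TABLE weights_seq (
  seq    INT AUTO_INCREMENT PRIMARY KEY,
  pigId  VARCHAR(16),
  day    DATE,
  weight INT
);
INSERT INTO weights_seq (pigId, day, weight)
SELECT pigId, day, weight FROM weights ORDER BY pigId, day;

-- Step 2: consecutive differences per pig via a self join:
SELECT a.pigId, a.day, a.weight - b.weight AS diff
FROM weights_seq a
JOIN weights_seq b
  ON a.pigId = b.pigId AND a.seq - 1 = b.seq;
```

Counting the longest run of positive diffs from that result is the part 
that still needs the loop in a stored procedure.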



Regards,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: what is the rationale for not allowing LOCK TABLES in a stored procedure

2018-08-18 Thread shawn l.green

Hello Jeff,

On 8/13/2018 12:05 PM, j...@lxvi.net wrote:

Hello, I have read through several pages of the reference manual, and
I've seen several instances where it is stated that LOCK TABLES (and
UNLOCK TABLES) is not allowed in a stored procedure, but so far, I
haven't found an explanation as to *why* that is. Could someone please
enlighten me?

Thanks


Normally, the list is more responsive than this. This is a pretty easy 
question and someone usually handles those before I need to step in as a 
backstop.


The key why you cannot execute a LOCK TABLE command within a stored 
program is here:

https://dev.mysql.com/doc/refman/5.7/en/lock-tables-and-transactions.html
###
LOCK TABLES is not transaction-safe and implicitly commits any active 
transaction before attempting to lock the tables.

###

Stored programs execute under the scope of the transaction in which they 
are started. That determines which sets of rows are "visible" to the 
routine and sets boundaries on what may be committed or rolled back 
should the need arise.


(a simple example)
* your session: START TRANSACTION
* your session: ...other data activity ...
* your session (INSERT ... )
  * causes an INSERT trigger to fire
* which calls a stored procedure

If that stored procedure or that trigger called a LOCK TABLE command, it 
would forcibly COMMIT the existing transaction you had been working 
within until that moment.  Your half-completed work would have become 
fully committed even if a later step had needed you to issue a ROLLBACK 
command.


Note that even if you are not in a multi-statement transaction, any 
stored programs called by or executed within the scope of your user 
command are part of that little mini (auto-committed) transaction.


Does that help?

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: How to get the MySQL Command-Line Tool to display Unicode properly?

2018-03-15 Thread shawn l.green

Hi Roger,

(please note, this is a bottom-post forum)

On 3/13/2018 7:54 PM, Roger House wrote:
>
> On 03/13/2018 03:11 PM, Reindl Harald wrote:
>>
>>
>> Am 13.03.2018 um 22:59 schrieb Roger House:
>>> In all respects except one, the treatment of Unicode works just fine.
>>> I can write Unicode to database tables, read it, display it, etc.,
>>> with no problems. The exception is mysql, the MySQL Command-Line
>>> Tool. When I execute a SELECT statement to see rows in a table
>>> containing the Venus and Mars Unicode characters, here is what I see
>>> on the screen:
>>>
>>> || Venus | ♀ | | Mars | ♂ | |
>>>
>>> What I should see in the right column are the standard glyphs for
>>> Venus and Mars.
>>>
>>> Any ideas about how to get the MySQL Command-Line Tool to display
>>> Unicode properly?
>> what operating system
>> what terminal
>>
>> all recent Linux systems have UTF8 as default
>>
>
>

I am running Ubuntu MATE 16.04.  I have the problem also on Windows 7
and on Mac OS Version 10.11.6.  I do not think that the problem has to
do with the operating system nor the terminal.  Everything about the
Unicode text works fine in all tools such as editors, the cat command,
etc.  It is only when I am running mysql and I issue a SELECT command to
see what is in a row.  Then the UTF-8 is not rendered properly.  I
believe the problem is with mysql.

Roger



If I presume that your terminal has a code page that is utf8-compatible 
(you say that the cat command renders the multibyte characters just 
fine), then it could be your client-side mysql settings that are 
rendering those multibyte characters as individual glyphs based on 
their individual byte values.


The next time you are in mysql and have a chance to look at some utf8 
data, please collect and share these two reports:


status
SHOW GLOBAL VARIABLES like '%haracter%';

(you can obfuscate any sensitive details like server names or addresses)
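If those reports show a non-UTF-8 client character set (for example, character_set_client or character_set_results reporting latin1), one hedged fix to try is aligning the client-side character sets; this is a sketch, not a confirmed diagnosis:

```sql
-- Align client, connection, and results character sets for this session:
SET NAMES utf8mb4;

-- Or start the client with the setting already applied:
--   mysql --default-character-set=utf8mb4
```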

Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Optimize fails due to duplicate rows error but no duplicates found

2018-02-23 Thread shawn l.green

(please do not top post - see my answer below)

On 2/13/2018 4:00 PM, Machiel Richards wrote:

OK, so we have managed to get an ID out of the errors, etc. However,
when we look in the table, that ID does not even exist at all.


No idea what is going on here, though.



*From:* shawn l.green <shawn.l.gr...@oracle.com>
*Sent:* 13 February 2018 09:51:33 PM
*To:* mysql@lists.mysql.com
*Subject:* Re: Optimize fails due to duplicate rows error but no
duplicates found
Hello Machiel,

On 2/13/2018 3:02 AM, Machiel Richards wrote:

Good day guys,


 I am hoping this mail finds you well.


I am at a bit of a loss here...


 We are trying to run optimize against a table in order to reclaim disk 
space from archived data which has been removed.


 However, after running for over an hour, the optimize fails, stating there 
is a duplicate entry in the table.



  We have now spent 2 days using various methods but we are unable to find 
any duplicates in the primary key and also nothing on the unique key fields.


Any idea on why optimize would still be failing ?



Regards



Is it possible that the duplicate keys were the result of
re-partitioning your data where one of the "older" copies was in the
wrong partition as part of an upgrade from an earlier version?

See the entry in
https://dev.mysql.com/doc/relnotes/mysql/5.5/en/news-5-5-31.html that
starts with...

Incompatible Change; Partitioning: Changes in the KEY partitioning hashing 
functions used with numeric, date and time, ENUM, and SET columns in MySQL 5.5 
makes tables using partitioning or subpartitioning by KEY on any of the 
affected column types and created  on a MySQL 5.5 or later server incompatible 
with a MySQL 5.1 server.

This is because the partition IDs as calculated by a MySQL 5.5 or later
server almost certainly differ from those calculated by a MySQL 5.1
server for the same table definition and data as a result of the changes
in these functions.

A normal indexed lookup against a partitioned table will use
"partition pruning" (particularly for a PK value). To see all of your
PK values regardless of which partition they are in, you need to scan
the table and avoid all indexes.

example:
# for a numeric PK column
CREATE TABLE myPK_list SELECT pk from sourcetable WHERE pk+0 > 0;

Then you can check the list in the generated table to find any duplicate
values.

Then you can modify a SELECT command to search each partition or
subpartition individually until you find the rows that are in the wrong
spots.
https://dev.mysql.com/doc/refman/5.7/en/partitioning-selection.html


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/
for details.




Then another thing to consider is that someone (during the lifetime of 
this table) changed the character set of your table (possibly changing 
it from using a case-sensitive collation to a case-insensitive 
collation) without actually converting the data on the table to use the 
new character set.


Is the key being duplicated numeric or character-based?

If numeric, is the value being reported as the duplicate at the high end 
of the permitted range of values for that column?


Regards,

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Optimize fails due to duplicate rows error but no duplicates found

2018-02-13 Thread shawn l.green

Hello Machiel,

On 2/13/2018 3:02 AM, Machiel Richards wrote:

Good day guys,


 I am hoping this mail finds you well.


I am at a bit of a loss here...


 We are trying to run optimize against a table in order to reclaim disk 
space from archived data which has been removed.


 However, after running for over an hour, the optimize fails, stating there 
is a duplicate entry in the table.



  We have now spent 2 days using various methods but we are unable to find 
any duplicates in the primary key and also nothing on the unique key fields.


Any idea on why optimize would still be failing ?



Regards



Is it possible that the duplicate keys were the result of 
re-partitioning your data where one of the "older" copies was in the 
wrong partition as part of an upgrade from an earlier version?


See the entry in 
https://dev.mysql.com/doc/relnotes/mysql/5.5/en/news-5-5-31.html that 
starts with...

Incompatible Change; Partitioning: Changes in the KEY partitioning hashing 
functions used with numeric, date and time, ENUM, and SET columns in MySQL 5.5 
makes tables using partitioning or subpartitioning by KEY on any of the 
affected column types and created on a MySQL 5.5 or later server incompatible 
with a MySQL 5.1 server. This is because the partition IDs as calculated by a 
MySQL 5.5 or later server almost certainly differ from those calculated by a 
MySQL 5.1 server for the same table definition and data as a result of the 
changes in these functions.


A normal indexed lookup against a partitioned table will use 
"partition pruning" (particularly for a PK value). To see all of your 
PK values regardless of which partition they are in, you need to scan 
the table and avoid all indexes.


example:
# for a numeric PK column; the pk+0 expression defeats index use,
# forcing a full scan across every partition
CREATE TABLE myPK_list SELECT pk FROM sourcetable WHERE pk+0 > 0;

Then you can check the list in the generated table to find any duplicate 
values.
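Continuing that sketch, one way to surface any duplicated values in the generated list (table and column names as in the example above) would be:

```sql
SELECT pk, COUNT(*) AS copies
FROM myPK_list
GROUP BY pk
HAVING COUNT(*) > 1;
```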


Then you can modify a SELECT command to search each partition or 
subpartition individually until you find the rows that are in the wrong 
spots.

https://dev.mysql.com/doc/refman/5.7/en/partitioning-selection.html
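As a hedged illustration of that per-partition search (the partition names p0 and p1 are hypothetical; use the names shown by SHOW CREATE TABLE for your table):

```sql
-- Check each partition in turn for the suspect key value:
SELECT pk FROM sourcetable PARTITION (p0) WHERE pk = 12345;
SELECT pk FROM sourcetable PARTITION (p1) WHERE pk = 12345;
-- A row is misplaced if it appears in a partition the current hashing
-- function would not choose for that value.
```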


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Examples of savepoints and transactions

2018-01-24 Thread shawn l.green

Hello Lars,

On 1/24/2018 8:50 AM, Johan De Meersman wrote:

What you're looking for is simple backup and restore :-)

Savepoints are, simply put, markers within a transaction, allowing you to 
roll back only part of a transaction instead of the whole thing. A commit will 
inevitably commit the ENTIRE transaction, and thus remove the savepoints.

A typical workflow for the kind of thing you're trying to do is to have your (automated) 
testing framework restore last night's backup after the test run. You could also make a 
backup before the test run and restore that afterwards; have an automated nightly db copy 
from prod to dev; or in very specific cases you could simply have your test system revert 
the data by issuing the "reverse" queries - although that one is rarely an 
option in real life.

Another alternative would be to take a filesystem (or virtual machine) 
snapshot, and revert to that after the tests. Filesystem snapshots will require 
your database to be stopped and started, though.

/Johan


- Original Message -

From: "Lars Nielsen" <l...@lfweb.dk>
To: "MySql" <mysql@lists.mysql.com>
Sent: Tuesday, 23 January, 2018 23:19:29
Subject: Re: Examples of savepoints and transactions



Den 22-01-2018 kl. 22:01 skrev shawn l.green:

Hello Lars,

On 1/21/2018 3:37 PM, Lars Nielsen wrote:

Hi,
I have a system that uses begin and commit transactions. It works
like a dream! ;)
Now I want to test it by creating test data. This, however, cannot be
rolled back. I think the solution for rolling back test data is to
use savepoints and rollback. I think it is hard to find examples of
this scenario. Are there some good guides or tutorials out there
somewhere? Any suggestions are welcome.

Best regards
Lars Nielsen



Can you mock up an example (a simple text walkthrough) of how you
think a savepoint should work with what you are calling "test data" ?
I think that the term "test data" is too general to make much sense to
most of us in the context you described.


Yours,

Hello Shawn,
Thanks for your interest. Here is an example of my idea.

I have a php site working through PDO connections. I insert some data
through php like this:

START TRANSACTION;
SELECT @A:=SUM(salary) FROM table1 WHERE type=1;
UPDATE table2 SET summary=@A WHERE type=1;
INSERT INTO table3 VALUES (x, y, z);
COMMIT;

Now I want to do automated tests that create "dummy" data that I want
to remove after the test has finished, like this:

SAVEPOINT autotest1;
START TRANSACTION;
SELECT @A:=SUM(salary) FROM table1 WHERE type=1;
UPDATE table2 SET summary=@A WHERE type=1;
INSERT INTO table3 VALUES (x, y, z);
COMMIT;
-- DO OTHER TRANSACTIONAL OPERATIONS
ROLLBACK TO autotest1;

All done. I have tested the application and have cleaned up the dummy
test-data.

The issue is that when I call the first commit then the savepoint is
deleted.

Is this possible at all?

Regards Lars







Is table2 what you want to return to its earlier state?

Other techniques to do what Johan suggested include:
* Make a copy of your "base" data for each test run. That way you don't 
change your starting point. When that test run is over, drop the copy. 
This way your "data to be tested" exists (or ceases to exist) outside 
the boundaries of the transactions you are creating to test/change that 
data but the original state of that data persists somewhere else.


* Use a non-transactional storage engine for table3. Being 
non-transactional means that the changes you store there will not be 
affected by a ROLLBACK or COMMIT; they become "permanent" the moment 
you make them.
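The first technique (copying the base data for each test run) might look like this minimal sketch, assuming table3 from your example is the table your tests modify:

```sql
-- Snapshot the starting state (structure, indexes, and rows):
CREATE TABLE table3_copy LIKE table3;
INSERT INTO table3_copy SELECT * FROM table3;

-- ... run the automated tests, committing transactions as usual ...

-- Restore the starting state and discard the snapshot:
TRUNCATE TABLE table3;
INSERT INTO table3 SELECT * FROM table3_copy;
DROP TABLE table3_copy;
```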


Yours,

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Examples of savepoints and transactions

2018-01-22 Thread shawn l.green

Hello Lars,

On 1/21/2018 3:37 PM, Lars Nielsen wrote:

Hi,
I have a system that uses begin and commit transactions. It works like a dream! 
;)
Now I want to test it by creating test data. This, however, cannot be rolled 
back. I think the solution for rolling back test data is to use savepoints and 
rollback. I think it is hard to find examples of this scenario. Are there some 
good guides or tutorials out there somewhere? Any suggestions are welcome.

Best regards
Lars Nielsen



Can you mock up an example (a simple text walkthrough) of how you think 
a savepoint should work with what you are calling "test data" ?  I think 
that the term "test data" is too general to make much sense to most of 
us in the context you described.



Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Can't get my query to return wanted data

2018-01-19 Thread shawn l.green

Hello Chris,

On 1/19/2018 12:50 AM, Chris Roy-Smith wrote:

Hi
I am running mysql 5.7.20 in ubuntu linux 17.10

I have 2 tables, member and status with contents like

member:
ident,  given,  surname
1   fredjones
2   johnhoward
3   henry   wales
4   jenny   brown

status:
ident   year
1   2017
2   2017
3   2017
4   2017
1   2018
3   2018

I want my query to return the name and ident from the member table for all
members that have no entry in status with year=2018.

I have been working on the following query to achieve this, but it only
returns data when there are no `year` entries for a selected year.

select details.ident, given, surname from details left join status on
details.ident = status.ident where NOT EXISTS (select year from status
where (status.year = 2018) and (details.ident = status.ident) )


Thank you for looking at this.
regards, Chris Roy-Smith




try this...

SELECT
  d.ident, d.given, d.surname
FROM details d
LEFT JOIN (
  SELECT DISTINCT ident
  FROM status
  WHERE year=2018
) s
  ON s.ident = d.ident
WHERE
  s.ident is NULL;

How it works
#
Start by building a list of unique `ident` values that match the 
condition you do NOT want to find. (you will see why in a moment)


LEFT JOIN that list to your list of members (with your list on the right 
side of the LEFT JOIN).  Where that join's ON condition is satisfied, a 
value for the column s.ident will exist. Where it isn't satisfied, there 
will be a NULL value in s.ident.


Finally, filter the combination of the s and d tables (I'm referring to 
their aliases) to find all the rows where s.ident was not given a value 
because it did not satisfy the ON condition of your outer join.

#
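For reference, against the sample data in your message (assuming the member rows shown are what the `details` table holds), the query above should return only the two members with no 2018 status row:

```
+-------+-------+---------+
| ident | given | surname |
+-------+-------+---------+
|     2 | john  | howard  |
|     4 | jenny | brown   |
+-------+-------+---------+
```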

Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Question about contributing a patch

2017-10-09 Thread shawn l.green



On 10/9/2017 3:27 AM, Xiaoyu Wang wrote:

Hello, I reported a bug at https://bugs.mysql.com/bug.php?id=87637, as well as 
a patch. And Bogdan, the bug hunter, told me this patch would show up on the 
dev contribution report. So, could anyone please tell me how to contact the dev 
team, or how I can know the progress of integrating the patch? By the way, I 
signed the Oracle Contributor Agreement.
Any reply would be a great help.
Thanks, sincerely
Xiaoyu



Hello Xiaoyu,

Your interaction with the developers will happen through your bug report 
just as it did with our bug report handling team. If they need any 
details or if they need to engage with you again that is where they will 
contact you.


As to the integration of your fix into our code... that gets more 
complicated. There may be edge cases or use cases that require us to 
modify your code. Sometimes these are found as the developer applies 
your patch to our code, sometimes with post-build unit testing, 
sometimes only after full integration testing.


And when that work may start depends on when a developer is scheduled to 
work on the specific bug you designed the patch for. So it could be a 
while.


Thank you very much for helping MySQL to become a better product!

Regards,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: innodb_read_only issues

2017-09-07 Thread shawn l.green

Hello Josh,

On 9/6/2017 11:01 PM, Josh Paetzel wrote:

I've followed the instructions at
https://dev.mysql.com/doc/refman/5.7/en/innodb-read-only-instance.html

Which starts with:

14.6.2 Configuring InnoDB for Read-Only Operation

You can now query InnoDB tables where the MySQL data directory is on
read-only media, by enabling the --innodb-read-only configuration option
at server startup.

Exactly what I want to do.

However the server bails out trying to create files.

2017-09-07T02:12:33.688368Z 0 [Note] InnoDB: Started in read only mode
2017-09-07T02:12:33.688405Z 0 [Note] InnoDB: Mutexes and rw_locks use
GCC atomic builtins
2017-09-07T02:12:33.688410Z 0 [Note] InnoDB: Uses event mutexes
2017-09-07T02:12:33.688415Z 0 [Note] InnoDB: GCC builtin
__atomic_thread_fence() is used for memory barrier
2017-09-07T02:12:33.688419Z 0 [Note] InnoDB: Compressed tables use zlib
1.2.11
2017-09-07T02:12:33.688695Z 0 [Note] InnoDB: Number of pools: 1
2017-09-07T02:12:33.688803Z 0 [Note] InnoDB: Using CPU crc32
instructions
2017-09-07T02:12:33.688810Z 0 [Note] InnoDB: Disabling background log
and ibuf IO write threads.
2017-09-07T02:12:33.690040Z 0 [Note] InnoDB: Initializing buffer pool,
total size = 1G, instances = 8, chunk size = 128M
2017-09-07T02:12:33.809821Z 0 [Note] InnoDB: Completed initialization of
buffer pool
2017-09-07T02:12:33.836689Z 0 [Note] InnoDB: Highest supported file
format is Barracuda.
2017-09-07T02:12:33.852709Z 0 [Note] InnoDB: Creating shared tablespace
for temporary tables
2017-09-07T02:12:33.852757Z 0 [ERROR] InnoDB: Operating system error
number 13 in a file operation.
2017-09-07T02:12:33.852764Z 0 [ERROR] InnoDB: The error means mysqld
does not have the access rights to the directory.
2017-09-07T02:12:33.852769Z 0 [ERROR] InnoDB: Operating system error
number 13 in a file operation.
2017-09-07T02:12:33.852774Z 0 [ERROR] InnoDB: The error means mysqld
does not have the access rights to the directory.
2017-09-07T02:12:33.852779Z 0 [ERROR] InnoDB: Cannot open datafile
'/var/db/mysql/ibtmp1'
2017-09-07T02:12:33.852784Z 0 [ERROR] InnoDB: Unable to create the
shared innodb_temporary
2017-09-07T02:12:33.852789Z 0 [ERROR] InnoDB: Plugin initialization
aborted with error Cannot open a file
2017-09-07T02:12:34.067298Z 0 [ERROR] Plugin 'InnoDB' init function
returned error.
2017-09-07T02:12:34.067315Z 0 [ERROR] Plugin 'InnoDB' registration as a
STORAGE ENGINE failed.
2017-09-07T02:12:34.067322Z 0 [ERROR] Failed to initialize plugins.
2017-09-07T02:12:34.067327Z 0 [ERROR] Aborting

For what it's worth /var/db/mysql is chmod 550, chown mysql:mysql, so
the mysql user has read access to the directory but can't create files
in it.

I've examined the source and found:

 /* Open temp-tablespace and keep it open until shutdown. */

 err = srv_open_tmp_tablespace(create_new_db, &srv_tmp_space);

 if (err != DB_SUCCESS) {
 return(srv_init_abort(err));
 }

in storage/innobase/srv/srv0start.cc which is not wrapped with a if
(!srv_read_only_mode)

Is this a bug I am hitting or am I holding it wrong?



What appears to be missing on that page is a discussion of what to do 
with the "temporary tablespace" used to hold intrinsic (internal) 
temporary tables created by different types of queries you could execute.


https://dev.mysql.com/doc/refman/5.7/en/innodb-temporary-tablespace.html

https://dev.mysql.com/doc/refman/5.7/en/internal-temporary-tables.html

If you point the --innodb-temp-data-file-path setting to a location that 
is writable (and accessible to the user that your mysqld is running 
as), does that get you past this problem?


https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_temp_data_file_path

Think of it as allocating "scratch space" for this mysqld to "think" 
while it processes your queries against the data.
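As a sketch only (the scratch path is an assumption for this example, not a recommendation), the relevant my.cnf fragment for a read-only data directory might look like:

```ini
[mysqld]
innodb_read_only = ON

# innodb_temp_data_file_path is interpreted relative to the data directory,
# so a relative path is used to reach a writable location outside the
# read-only mount:
innodb_temp_data_file_path = ../mysql-scratch/ibtmp1:12M:autoextend
tmpdir                     = /var/tmp/mysql-scratch
```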


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Fwd: Re: Something strange here...

2017-06-26 Thread shawn l.green

Hello List,

So sorry about the bad click. I meant to "Reply to list" but instead 
just replied to the original poster.


This is the exact same advice that hsv@ just provided. If I had paid 
attention, I could have saved him the duplication of effort. My 
apologies to him and everyone else.


Humbly embarrassed,
Shawn


 Forwarded Message 
Subject: Re: Something strange here...
Date: Wed, 14 Jun 2017 14:04:02 -0400
From: shawn l.green <shawn.l.gr...@oracle.com>
Organization: Oracle Corporation
To: Chris Knipe <sav...@savage.za.org>

Hello Chris,

On 6/13/2017 5:42 PM, Chris Knipe wrote:

Hi all,

Can someone explain to me why this is happening please:

mysql> SELECT * FROM CustomerDetails WHERE Username='blah'\G
*** 1. row ***
  EntryID: F83D4109-DCA8-426F-98B1-00B4AF117ACB
 Username: blah
AccountVolume: 0
1 row in set (0.00 sec)

mysql> UPDATE CustomerDetails SET 
AccountVolume=GREATEST(CAST(AccountVolume-2865 AS SIGNED), CAST(0 AS SIGNED)) 
WHERE Username='blah';
ERROR 1690 (22003): BIGINT UNSIGNED value is out of range in 
'(`test`.`CustomerDetails`.`AccountVolume` - 2865)'

Using GREATEST, shouldn’t it be irrelevant whether AccountVolume-INT is signed 
or unsigned?

How would I go about doing this?  I have played quite a bit with CAST here, but 
I am not having much luck.

Thnx,
Chris.



If I read between the lines, I think you created AccountVolume as a
"BIGINT UNSIGNED" column. Right?

When you subtract something from a 0 BIGINT UNSIGNED column, you are
attempting to make a negative BIGINT UNSIGNED value (which is illegal).

Have you tried casting the column to SIGNED before the subtraction?
Instead of this...
CAST(AccountVolume-2865 AS SIGNED)

Try this
(CAST(AccountVolume AS SIGNED) - 2865)


That should get through the first part of the problem. But you still
need to re-cast the result of the GREATEST function back to an UNSIGNED
value so that it matches the type of the left side of the assignment
operator.


(showing any earlier attempts to fix the problem when engaging outside
resources for help can save guessing time)


Another way to avoid this problem is to use something like an IF()
function to avoid going out of range

AccountVolume = IF(AccountVolume > 2865, AccountVolume-2865, 0)
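Putting the two suggestions together as a hedged sketch (table, column, and values taken from your statement; the recast back to UNSIGNED keeps the assignment types aligned):

```sql
-- Variant 1: cast before subtracting, then recast the result
UPDATE CustomerDetails
SET AccountVolume =
      CAST(GREATEST(CAST(AccountVolume AS SIGNED) - 2865, 0) AS UNSIGNED)
WHERE Username = 'blah';

-- Variant 2: sidestep the casts entirely with IF()
UPDATE CustomerDetails
SET AccountVolume = IF(AccountVolume > 2865, AccountVolume - 2865, 0)
WHERE Username = 'blah';
```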



--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/
for details.






Re: mysqld_multi

2017-05-20 Thread shawn l.green

Hello Matthew,

On 5/19/2017 12:19 PM, Matthew Black wrote:

I just installed MySQL Enterprise Edition 5.7 on RHEL 6.8 to replace an aging 
5.1 system running on RHEL 5. We run mysqld_multi with multiple instances, each 
database on its own TCP Port 33xx. I'm having trouble creating a database on 
the new server in a multi environment.



Can anyone provide a simple example of how to edit /etc/my.cnf file and command 
line steps necessary for creating a new database running on, for example, port 
3311?



Thanks in advance.



matthew



How to use mysqld_multi is covered in the manual. This includes a sample 
my.cnf file demonstrating how to define your separate instances.

https://dev.mysql.com/doc/refman/5.7/en/mysqld-multi.html

However, before you set up an instance to be managed by mysqld_multi, you 
will need to instantiate a set of data files for that 5.7 instance of the 
mysqld daemon to manage. You do that by following the directions here (by 
hand) the first time.

https://dev.mysql.com/doc/refman/5.7/en/data-directory-initialization.html

This means you need to setup at least two folders (one for --datadir and 
one for --tmpdir) for each separate instance you want to create and 
assign ownership and privileges to those folders appropriate to the user 
your mysqld daemon will be executing as when it runs. There are other 
things you must also keep unique between instances when they share a 
common host machine. Those are described here:

https://dev.mysql.com/doc/refman/5.7/en/multiple-servers.html

An example of setting up the folders and assigning privileges to them is 
located in the instructions for installing a set of mysqld binaries using 
a .zip or .tar archive.  Please note, you do not need a separate mysqld 
installation for each instance you want to create. Several daemons (each 
operating on their own port, socket, folders, data files,... ) can be 
started using just one set of binary files.

https://dev.mysql.com/doc/refman/5.7/en/binary-installation.html

So... the general process would look like this (presuming you have 
already installed mysqld and setup at least one instance)

==

1) Decide where you want a second (or later) instance to store its 
files. Choose port numbers and unix socket names for this new instance 
that are distinct from those of any other instance running on this host.


2) Setup any new folders you need to create (including assigning privileges)

3) Document those names and any other settings you want this additional 
instance to use in a configuration file specific for this instance


4) Use that special configuration file to bootstrap (initialize) the 
data files used to manage that instance (the --initialize instructions 
were linked to earlier in this reply)


5) Once you have this instance setup the way you want. Shut it down.

6) Copy the elements that are unique to this instance into an 
appropriately-named section of your common configuration file (the one 
that mysqld_multi will read)


7) Test that you can start/stop this new instance using mysqld_multi
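As an illustrative sketch only (the ports, paths, and binary locations here are assumptions for this example), the mysqld_multi-relevant sections of the shared configuration file from steps 6 and 7 might look like:

```ini
[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin

[mysqld1]
port    = 3306
socket  = /var/run/mysqld/mysqld1.sock
datadir = /var/lib/mysql-inst1
tmpdir  = /var/tmp/mysql-inst1

[mysqld2]
port    = 3311
socket  = /var/run/mysqld/mysqld2.sock
datadir = /var/lib/mysql-inst2
tmpdir  = /var/tmp/mysql-inst2
```

With a file like that in place, `mysqld_multi start 2` would start the second instance and `mysqld_multi report` would show the state of both.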

As you can tell, it takes a bit of planning and effort to establish a 
non-default setup of hosting multiple MySQL instances on the same host 
machine. There is no simple one-line command to tell mysqld_multi to 
create a new instance as there are things it cannot do (like create 
folders in your file system).


Regards,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Ssd vs mix of ssd and spinning disk

2017-05-13 Thread shawn l.green

Hi Shain,

On 5/8/2017 1:53 PM, Shain Miley wrote:

Hello,
We have traditionally setup our mysql database servers with a mix of ssd and 
spinning disk drives.

We use the ssd drives (Raid-1) for the mysql tablespace data, and we use the 
spinning disks (15 sas in Raid-1) for the index data, etc.

I am wondering if going forward we should simply put all of the data on the ssd 
drives. Does anyone have any information on whether or not we would see any 
significant performance increase if we made this switch?

I have been thinking about using 4 ssd drives (Raid-10) going forward…and 
wondering if we should expect to see any improvement in the database 
performance.

Any thoughts?

Thanks in advance,

Shain



If you would benefit from shifting storage technologies depends on how 
limiting your current devices are to your overall throughput.


In most cases, workloads are either CPU-bound (normally due to poor 
choices in table design or query patterns) or DISK-bound (too many 
reads, writes, and I/O requests for the device to keep up).


Occasionally systems become MEMORY-bound (normally due to poor 
configuration choices which push the system to using swap) or 
NETWORK-bound (the number of concurrent network round trips to complete 
a task is higher than the components can handle).



Of those 4 situations, which is contributing most to your total response 
latency?


For example, are you spending more time waiting for data to be buffered 
in from disk than you are computing and returning the results? If so, 
faster disks could help temporarily.


What may help more (and for a longer time) is to improve your storage 
and retrieval patterns (table and query designs) to require less 
frequent trips to disk (aka, better buffering) or to need smaller slices 
of each table (more selective indexes, querying for fewer columns, 
sharding tables, sharding data to different instances, partitioning 
data, ... ).



--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: tcmalloc mysql 5.7.14

2017-05-13 Thread shawn l.green

(yes, I recognize how late this reply is)

On 5/8/2017 7:56 AM, Reindl Harald wrote:



Am 08.05.2017 um 13:51 schrieb Machiel Richards:

We are having an issue with memory allocations on mysql 5.7.14
whereby
mysql is not releasing the memory that is being allocated during
mysqldump processes.

 This has been logged as a bug with mysql dev team however they do
not
see this as a bug and suggested we test using another malloc library
such as tcmalloc.

However from what I can see, this is no longer included in mysql 5.7
and thus I am trying to find out if anyone can tell me how to enable /
install this and how to tell mysql to use this library.


does the oracle stuff not support "jemalloc" like MariaDB?

MariaDB even supports "-DWITH_JEMALLOC=system" because it's not the job
of random software to ship and build random library sources in their
tarballs, which usually don't get much attention in the case of updates
(unlike system packages)



Yes, if your system has the jemalloc library on it, MySQL can use it. 
One way to activate it is with this option to mysqld_safe.


https://dev.mysql.com/doc/refman/5.7/en/mysqld-safe.html#option_mysqld_safe_malloc-lib
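For instance, a minimal my.cnf fragment (the library path is an assumption; check where your distribution actually installs jemalloc):

```ini
[mysqld_safe]
malloc-lib = /usr/lib64/libjemalloc.so.1
```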

And, you can control whether the InnoDB engine uses its own memory 
management routines or those provided by the operating system:
https://dev.mysql.com/doc/refman/5.7/en/innodb-performance-use_sys_malloc.html 




Just because we don't bundle a library with our software does not mean 
that our end users cannot download one for their operating system from 
other reputable sources (like the Google devs for tcmalloc, for example).


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Can not add foreign key constraint

2017-04-24 Thread shawn l.green



On 4/24/2017 2:10 PM, Peter Brawley wrote:

On 4/24/2017 12:28, David Mehler wrote:

...snip


Adding in a dummy Create Table for the missing referenced
`virtual_domains`, we have ...

drop table if exists lastauth, virtual_users, virtual_domains;

CREATE TABLE virtual_domains (
  id int PRIMARY KEY
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `virtual_users` (
  `id` int(11) NOT NULL auto_increment,
  `domain_id` int(11) NOT NULL,
  `user` varchar(40) NOT NULL,
  `password` varchar(32) NOT NULL,
  `quota` bigint(20) NOT NULL DEFAULT 256,
  `quota_messages` int(11) NOT NULL DEFAULT 0,
  PRIMARY KEY (`id`),
  UNIQUE KEY (`user`),
  FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS `lastauth` (
  `user` varchar(40) NOT NULL,
  `remote_ip` varchar(18) NOT NULL,
  `timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE
CURRENT_TIMESTAMP,
  PRIMARY KEY (`user`),
  FOREIGN KEY (user) REFERENCES virtual_users(user) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;


which executes without error. To find out what's going awry in your
setup, right after the error occurs execute ...

show engine innodb_status;


and search the result for LATEST FOREIGN KEY ERROR.

PB



Typo warning:   "innodb status" is two words (no underscore)

https://dev.mysql.com/doc/refman/5.6/en/show-engine.html
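For clarity, the corrected statement reads:

```sql
-- "INNODB" and "STATUS" are separate words (no underscore);
-- the \G terminator in the mysql client formats the long output vertically
SHOW ENGINE INNODB STATUS\G
```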

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: DATETIME vs CHAR for "timestamp"

2017-04-14 Thread shawn l.green



On 4/14/2017 3:11 PM, SSC_perl wrote:

I have creation date/time fields in my script that are formatted as 
|MM|DD|hh|mm|ss.  Short of changing the script, should I set the field type 
in MySQL to DATETIME, or would it be better in terms of speed and efficiency to 
set it as char(19)?  Or would it not make a difference?

Thanks,
Frank




That all depends. Do you...

a) want mysqld to treat that column as an actual temporal value

or

b) want mysqld to see it as an opaque string of random alphanumeric 
characters


As you have referred to this as a "creation date/time" tracking field, 
it appears you want it treated as a temporal value so that you can 
easily do things like:


SELECT ... WHERE create_date > NOW() - INTERVAL 7 DAY;

If it's a temporal column, you can use functions like those in the next 
URL against it. If it's a string-type column, you can't unless you first 
convert your string into a temporal data type.


https://dev.mysql.com/doc/refman/5.6/en/date-and-time-functions.html


My suggestion is to use a native temporal data type (I recommend 
DATETIME) and that you review this section on how to format temporal 
literals (so that you can pass them easily from your application into 
MySQL)

https://dev.mysql.com/doc/refman/5.6/en/date-and-time-literals.html

Using the correct data type is important to performance. You want to 
avoid forcing the server to perform too many implicit type conversions. 
Those usually nullify any performance improvements an index on those 
columns might provide:

https://dev.mysql.com/doc/refman/5.6/en/type-conversion.html

And the native DATETIME data type needs only 5 to 8 bytes to store its 
data (depending on server version and fractional-second precision), 
while your CHAR(19) may need up to 57 bytes in a utf8 table.

https://dev.mysql.com/doc/refman/5.6/en/storage-requirements.html
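A minimal sketch of the suggested approach (the table and column names here are illustrative, not from the original question):

```sql
-- store the creation time as a native temporal type
CREATE TABLE orders (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  create_date DATETIME NOT NULL,
  KEY (create_date)
);

-- a standard temporal literal passes in cleanly from the application
INSERT INTO orders (create_date) VALUES ('2017-04-14 15:11:00');

-- date functions and indexed range scans now work directly on the column
SELECT id FROM orders
WHERE create_date > NOW() - INTERVAL 7 DAY;
```

With a CHAR(19) column, the same WHERE clause would force a string-to-datetime conversion on every row, defeating the index.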


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: MySQL server has gone away

2017-04-03 Thread shawn l.green



On 4/3/2017 8:15 AM, Mahmood N wrote:

When I click on the submit button in Moodle and it is waiting for refresh, I 
execute the mysql command but the output is not meaningful

mahmood@ce:/var/www/html/courses$ mysql -u moodle -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 30912
Server version: 5.5.54-0ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show full processlist
 ->
 ->



Meanwhile using Webmin, I execute the same command for that user and see
  Output from SQL command show full processlist ..
| Id | User | Host | db | Command | Time | State | Info |
| 30912 | moodle | localhost |
  | Sleep | 42 |



I am not expert with MySQL, however as the Moodle admin I am trying to fix the 
problems.
Regards,
Mahmood



You need to consider a few possibilities,

a) Moodle didn't want to wait long enough for the query to complete (a 
Moodle Timeout) so it said "the server is not responding..."


b) Moodle sent MySQL a command that was "too large".  To protect itself 
from abuse, all MySQL instances have a configurable limit about how 
"large" a command can be. If the command is larger than this limit, the 
server rejects it and closes the connection.  (this could explain why 
the query you just attempted from Moodle is not visible in the list of 
executing commands)


c) Something is unstable in your MySQL instance. The MySQL Error Log may 
contain details explaining why mysqld was unable to stay running.  The 
angel process mysqld_safe would try to restart the server automatically 
which could explain why Moodle was only unresponsive for a short while.


Additional resources:
https://dev.mysql.com/doc/refman/5.6/en/problems.html   (in particular, 
review B.5.2)
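The "too large" limit described in possibility (b) is the max_allowed_packet system variable. A sketch of inspecting and raising it (the 64M value is only an example, not a recommendation):

```sql
-- check the current limit, in bytes
SHOW VARIABLES LIKE 'max_allowed_packet';

-- raise it for new connections; make the same change in my.cnf to survive restarts
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;
```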


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: check_mysql_health poll

2016-07-15 Thread shawn l.green



On 7/15/2016 6:58 AM, Johan De Meersman wrote:

Hey,

I just happened upon your poll, so I'm sending you a brief mail because I have a 
different opinion still :-) I'm also CCing the MySQL list, as I feel that more 
input on this might be a good thing - and it's worth some exposure anyway.

I believe there are two distinct measures that can be taken:
* Ratio of selects that were returned from the cache against total server 
queries (caching ratio)
* Ratio of selects that were served from cache against selects that were 
inserted into the cache (statement reuse ratio)

The former gives an indication of how many queries were served from the cache 
against the total number of questions asked. It's a useful measure to see 
whether it's worth the effort to find ways to rewrite queries or code 
so that more queries become cacheable. Given that 
https://dev.mysql.com/doc/refman/5.7/en/server-status-variables.html#statvar_Com_xxx
 explicitly states that queries served from cache do NOT increment com_select, 
I believe that calculation requires qcache_hits, qcache_inserts AND com_select. 
I'm not clear on whether qcache_not_cached augments com_select, though I would 
suspect it does. Even if this ratio is relatively low, it's not necessarily a 
problem - every query served from cache is a parse/exec saved. On a multi-tenant 
server you could have one database that benefits hugely from the cache, and ten 
others that hardly use it, and that is not a problem as such.

The latter, on the other hand, tells you how many of the queries that were 
inserted into the cache are actually served from cache afterwards. This 
requires only qcache_hits and qcache_inserts, but it is a very good measure of 
whether your query cache is actually providing any benefit - THIS is the ratio 
that should be high. If it's close to 1, it may mean you spend more time 
inserting and clearing than you save by the occasional cache hit.


So, my suggestion would be to certainly use the latter option for the 
check_mysql_health check; but it may be useful in some scenarios to have a 
separate check for the former, too.

/johan



Excellent advice.

If you read through the code, you will find that every SELECT command 
will either hit the query cache (incrementing Qcache_hits) or require 
execution to evaluate (incrementing Com_select).  So for an average of 
your Query Cache efficiency since the last restart (or the last 
statistics reset) use this formula


Efficiency in % = (Qcache_hits)/(Qcache_hits + Com_select) * 100

To get an average efficiency over a span of time, execute a SHOW GLOBAL 
STATUS report at the start of the span and another at the end of the 
span then compute that formula comparing the changes in those counters 
(the deltas).
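The arithmetic of that formula can be sketched directly in SQL. The counter values below are invented example numbers standing in for what SHOW GLOBAL STATUS would report:

```sql
-- suppose SHOW GLOBAL STATUS reported Qcache_hits = 8000 and Com_select = 2000;
-- efficiency = hits / (hits + executed selects) * 100
SELECT 8000 / (8000 + 2000) * 100 AS efficiency_pct;  -- 80.0000
```

For a span of time, substitute the deltas of the two counters between the start and end of the span.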


Another way to look at reuse rate is to estimate how quickly you are 
turning over the content of the Query Cache.  Let's say your 
Qcache_inserts rate is about 500/sec and on average you have about 5000 
queries in the cache.  This gives you a very rough lifetime of about 10 
seconds for any single query result in the cache.  If you are not seeing 
a lot of lowmem prunes during this period, then those existing query 
results are not being forced out of the cache due to space restrictions; 
they are most likely being invalidated automatically due to changes 
happening to the tables they are based on.


In most cases, you gain efficiency by removing the mutex that protects 
the content of the Query Cache and allowing all incoming commands to 
execute in parallel rather than being serialized via that cache mutex. 
You do this by setting --query-cache-type=0 (or OFF) not just by 
allocating no space to the buffer. This is particularly true if you

* have a low reuse rate
* have a high churn rate
* do not have a large population of queries that are repeated (exactly) 
against sets of tables that change rarely.
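A sketch of disabling the cache as described above. Changing query_cache_type is restricted at runtime in some versions, so the option-file form plus a restart is the reliable route:

```ini
# my.cnf -- disable the query cache entirely: type OFF removes the mutex
# serialization; size 0 alone only empties the buffer
[mysqld]
query_cache_type = 0
query_cache_size = 0
```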


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: signal handling in mysql cli

2016-06-30 Thread shawn l.green

Hello Sebastien,

You should stop top-posting, it inverts the flow of your investigation 
making it harder to follow. (see bottom)


On 6/21/2016 12:45 PM, Sebastien FLAESCH wrote:

The process list show my (killed) thread as follows:

mysql> show processlist;
++-+-+---+-+--+--+--+

| Id | User| Host| db| Command | Time | State|
Info |
++-+-+---+-+--+--+--+

| 20 | root| localhost:48203 | test1 | Query   |0 | starting |
show processlist |
| 21 | mysuser | localhost:48209 | test1 | Sleep   |  182 |  |
NULL |
++-+-+---+-+--+--+--+


Why is it in "Sleep" state?!?

Seb

On 06/21/2016 06:27 PM, Sebastien FLAESCH wrote:

FYI, I get the same problem with MySQL 5.7.13.

Seb

On 06/21/2016 04:59 PM, Sebastien FLAESCH wrote:

Seems that after KILL QUERY mysql-thread-id, a call to
mysql_stmt_close(stmt-handle) hangs...

This did not happen in 5.6 ...

Will try 5.7.13 ...

Seb

On 06/21/2016 04:03 PM, Sebastien FLAESCH wrote:

Hi all,

The technique described in this thread is working fine with MySQL
5.6 (libmysqlclient).

Basically, in a SIGINT signal handler, we establish a new connection
to perform a

KILL QUERY pid

...

But with 5.7 (5.7.11) we get now a different result:

A) The query is still interrupted, but we no longer get an SQL error
-1317.

B) For some reason, the program does not want to exit() - (must
investigate)

Any clues?


With mysql it's working fine:

mysql> select sleep(10);
^C^C -- query aborted
+---+
| sleep(10) |
+---+
+---+
1 row in set (2.79 sec)

mysql> \q
Bye



Thanks!
Seb

On 12/03/2014 05:25 PM, Sebastien FLAESCH wrote:

Hi all,

I have a similar question regarding KILL QUERY usage:

We have a C client program using libmysqlclient.so, it is a
single-threaded program.

When running a long query, how can I send the KILL QUERY command
when a SIGINT (CTRL-C)
is caught? (of course we implement a signal handler, so we keep the
control)

=> Is is safe to establish a new connection to the server in the
signal handler, using
mysql_init() + mysql_real_connect(), and execute the KILL QUERY
with the mysql thread
id I got from the initial mysql_init() / mysql_real_connect()?

I made some tests, and it seems to work fine, the long query
returns SQL error -1317:
"Query execution was interrupted" (which is expected)

We want to support SQL interruption properly, so please someone
from the dev team,
give me a clear answer... I don't want to use a side effect or
undocumented feature.

Doing all this stuff in a signal handler is certainly risky... no?

I could not find the information in the documentation
(mysql_real_connect).

I wish there would be an API like mysql_cancel_query(), similar to
Oracle's OCI
OCIBreak().

Thanks!
Seb



On 12/02/2014 05:13 PM, walter harms wrote:

hi list,

when i use CTRL-C to break a query that works fine in interactive
mode.

mysql> select sleep(10) ;
^CCtrl-C -- sending "KILL QUERY 24289" to server ...
Ctrl-C -- query aborted.
+---+
| sleep(10) |
+---+
+---+
1 row in set (0.86 sec)

but when i use the noninteractive mode
   timeout 5 mysql -BAN -e "select now(); select sleep (100) ;
select now() "

it looks like it works, but "show full processlist;" shows otherwise,
and that was confirmed by a list of long-running queries.

Is there a way to make it behave like the interactive version ?
Now it is a bit confusing for everyone.

re,
  wh


You did not KILL the entire client session (KILL CONNECTION), you only killed 
the QUERY that the session was executing. The client remains connected 
and the session remains active. The reason it is in Sleep state is 
that the server is waiting for the client to send its next command.


http://dev.mysql.com/doc/refman/5.7/en/kill.html
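The distinction can be sketched as follows, using the Id 21 of the mysuser session from the process list shown above:

```sql
-- ends only the statement the session is currently running;
-- the session itself stays connected (and then shows as Sleep)
KILL QUERY 21;

-- ends the running statement AND terminates the connection
KILL CONNECTION 21;
```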

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Adding values returned by GREATEST

2016-05-14 Thread shawn l.green



On 5/14/2016 2:57 PM, Peter Brawley wrote:

On 5/14/2016 11:16, shawn l.green wrote:

Hello Sukhjinder,

On 5/3/2016 8:55 AM, Sukhjinder K. Narula wrote:

Hello,

i have a question regarding the GREATEST function of mysql.

I would like to add the values returned by GREATEST function is
mysql, so a
query is like below:

For example table t has 6 fields with values as follows: A = 1, B =
3, C=0,
D = 0, E = 1 and F = 0 and I run a query:

SELECT
GREATEST (A, B, C) AS G1,
GREATEST (D, E, F) AS G2,
(
GREATEST (A, B, C) + GREATEST(D, E, F)

) AS Total
FROM t

The result row I expect is: 3, 1, 4
But I get 3, 1, 6

However when I run the query like below I get correct results as total
being 4:

SELECT
(
GREATEST (1, 3, 0) + GREATEST(0,1,0)

) AS Total

So what I noticed is as I add result from GREATEST function, the
result is
adding 1 for each GREATEST call I have in total. So, if I change my
query
as below:

SELECT
GREATEST (A, B, C) AS G1,
GREATEST (D, E, F) AS G2,
(
GREATEST (A, B, C) + GREATEST(D, E, F) + GREATEST(D, E, F)

) AS Total
FROM t

The results will be 3, 1, 8

GREATEST (A, B, C) + GREATEST(D, E, F) + GREATEST(D, E, F) is
calculated as

GREATEST (A, B, C)  = 3 + 1
GREATEST(D, E, F) = 1 +1
GREATEST(D, E, F) = 1 +1

So the total is 8.

I have tried online to search for this type of behaviour but no luck.
Can
anyone please explain this.

Many Thanks,
SK


I attempted to reproduce this problem but was unable to do so.
(testing with 5.7.11)

localhost.test>SELECT @@version;
+---+
| @@version |
+---+
| 5.7.11-enterprise-commercial-advanced |
+---+

localhost.(none)>select greatest(1,3,0), greatest(0,1,0),
greatest(1,3,0)+ greatest(0,1,0) as total
-> ;
+-+-+---+
| greatest(1,3,0) | greatest(0,1,0) | total |
+-+-+---+
|   3 |   1 | 4 |
+-+-+---+
1 row in set (0.00 sec)

localhost.(none)>set @A = 1, @B = 3, @C=0, @D = 0, @E = 1, @F = 0;
Query OK, 0 rows affected (0.00 sec)

localhost.(none)>select greatest(@a,@b,@c), greatest(@d,@e,@f),
greatest(@a,@b,@c)+ greatest(@d,@e,@f) as total;
+++---+
| greatest(@a,@b,@c) | greatest(@d,@e,@f) | total |
+++---+
|  3 |  1 | 4 |
+++---+
1 row in set (0.00 sec)

localhost.(none)>create database test;
Query OK, 1 row affected (0.00 sec)

localhost.(none)>use test
Database changed
localhost.test>create table t1 (a int, b int, c int, d int, e int, f
int);
Query OK, 0 rows affected (0.23 sec)

localhost.test>insert t1 values (1,3,0,0,1,0);
Query OK, 1 row affected (0.03 sec)

localhost.test>select greatest(a,b,c), greatest(d,e,f),
greatest(a,b,c)+ greatest(d,e,f) as total from t1;
+-+-+---+
| greatest(a,b,c) | greatest(d,e,f) | total |
+-+-+---+
|   3 |   1 | 4 |
+-+-+---+
1 row in set (0.00 sec)

localhost.test>select greatest(a,b,c)+ greatest(d,e,f)+
greatest(d,e,f) as total from t1;
+---+
| total |
+---+
| 5 |
+---+
1 row in set (0.00 sec)

Can you provide a more complete test case?
Can you tell us which version of MySQL you are using?


He asked this in the Newbie forum last month. The column is Enum, with
whose ambiguities Greatest() can produce odd-looking arithmetic ...

drop table if exists t;
create table t(i enum('2','1','3'), j enum('5','2','8') );
insert into t values('1','1');
select greatest(i,j) from t;
+---+
| greatest(i,j) |
+---+
| 5 |
+---+
select greatest(i+0,j+0) from t;
+---+
| greatest(i+0,j+0) |
+---+
| 2 |
+---+

PB



Thanks Peter!

Yes, using ENUMs instead of actual numeric values can easily make 
everything act weird. Sometimes you see the position within the ENUM of 
the matching value, sometimes you see the value. It all depends on how 
you reference the column.



For everyone else, remember: ENUM is a way to store only a specific set 
of string values into a column. We even document how confusing it can be 
if you attempt to work with it as a set of numeric constants.

Quoting from http://dev.mysql.com/doc/refman/5.7/en/enum.html
> If you store a number into an ENUM column, the number is treated as
> the index into the possible values, and the value stored is the
> enumeration member with that index. (However, this does not work
> with LOAD DATA, which treats all input as strings.) If the numeric
> value is quoted, it is still interpreted as an index if there is no
> matching string in 

Re: Adding values returned by GREATEST

2016-05-14 Thread shawn l.green

Hello Sukhjinder,

On 5/3/2016 8:55 AM, Sukhjinder K. Narula wrote:

Hello,

i have a question regarding the GREATEST function of mysql.

I would like to add the values returned by GREATEST function is mysql, so a
query is like below:

For example table t has 6 fields with values as follows: A = 1, B = 3, C=0,
D = 0, E = 1 and F = 0 and I run a query:

SELECT
GREATEST (A, B, C) AS G1,
GREATEST (D, E, F) AS G2,
(
GREATEST (A, B, C) + GREATEST(D, E, F)

) AS Total
FROM t

The result row I expect is: 3, 1, 4
But I get 3, 1, 6

However when I run the query like below I get correct results as total
being 4:

SELECT
(
GREATEST (1, 3, 0) + GREATEST(0,1,0)

) AS Total

So what I noticed is as I add result from GREATEST function, the result is
adding 1 for each GREATEST call I have in total. So, if I change my query
as below:

SELECT
GREATEST (A, B, C) AS G1,
GREATEST (D, E, F) AS G2,
(
GREATEST (A, B, C) + GREATEST(D, E, F) + GREATEST(D, E, F)

) AS Total
FROM t

The results will be 3, 1, 8

GREATEST (A, B, C) + GREATEST(D, E, F) + GREATEST(D, E, F) is calculated as

GREATEST (A, B, C)  = 3 + 1
GREATEST(D, E, F) = 1 +1
GREATEST(D, E, F) = 1 +1

So the total is 8.

I have tried online to search for this type of behaviour but no luck. Can
anyone please explain this.

Many Thanks,
SK


I attempted to reproduce this problem but was unable to do so.
(testing with 5.7.11)

localhost.test>SELECT @@version;
+---+
| @@version |
+---+
| 5.7.11-enterprise-commercial-advanced |
+---+

localhost.(none)>select greatest(1,3,0), greatest(0,1,0), 
greatest(1,3,0)+ greatest(0,1,0) as total

-> ;
+-+-+---+
| greatest(1,3,0) | greatest(0,1,0) | total |
+-+-+---+
|   3 |   1 | 4 |
+-+-+---+
1 row in set (0.00 sec)

localhost.(none)>set @A = 1, @B = 3, @C=0, @D = 0, @E = 1, @F = 0;
Query OK, 0 rows affected (0.00 sec)

localhost.(none)>select greatest(@a,@b,@c), greatest(@d,@e,@f), 
greatest(@a,@b,@c)+ greatest(@d,@e,@f) as total;

+++---+
| greatest(@a,@b,@c) | greatest(@d,@e,@f) | total |
+++---+
|  3 |  1 | 4 |
+++---+
1 row in set (0.00 sec)

localhost.(none)>create database test;
Query OK, 1 row affected (0.00 sec)

localhost.(none)>use test
Database changed
localhost.test>create table t1 (a int, b int, c int, d int, e int, f int);
Query OK, 0 rows affected (0.23 sec)

localhost.test>insert t1 values (1,3,0,0,1,0);
Query OK, 1 row affected (0.03 sec)

localhost.test>select greatest(a,b,c), greatest(d,e,f), 
greatest(a,b,c)+ greatest(d,e,f) as total from t1;

+-+-+---+
| greatest(a,b,c) | greatest(d,e,f) | total |
+-+-+---+
|   3 |   1 | 4 |
+-+-+---+
1 row in set (0.00 sec)

localhost.test>select greatest(a,b,c)+ greatest(d,e,f)+ greatest(d,e,f) 
as total from t1;

+---+
| total |
+---+
| 5 |
+---+
1 row in set (0.00 sec)

Can you provide a more complete test case?
Can you tell us which version of MySQL you are using?

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: parallel installations of mysql

2016-04-21 Thread shawn l.green



On 4/20/2016 2:04 PM, Martin Mueller wrote:


I am running MySQL 5.6.22 on an iMac as a desktop database. I would like to 
install 5.7.12. Can I install it as a parallel and independent instance? And 
if so, are there special problems to watch out for?





Why would I want to do this? Well, I have a set of databases and tables on the 
old installation that have grown over the years. Given the way I work, the 
simplest thing would be to install the new database and then work through my 
existing tables over a number of weeks and transfer stuff as I go along. That 
may not be very professional but it works for me, and it would let me keep the 
old alongside the new, just in case something goes wrong.

My friends tell me to use sqlite, and they are probably right since file 
management is so much simpler. But I find the many builtin functions of MySQL 
very helpful and don't particularly want to learn a new set.

Martin Mueller



Many systems have more than one mysqld running on them at the same time. 
To make them operate safely, you have to isolate them from each other 
using the guidance in this section of the manual:


http://dev.mysql.com/doc/refman/5.7/en/multiple-servers.html
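At minimum, each instance needs its own data directory, network port, socket, and log locations. A sketch of a second instance's option file; every path and port below is an example, not a recommendation:

```ini
# option file for the second (5.7) instance -- all values are illustrative
[mysqld]
datadir   = /usr/local/mysql57/data
port      = 3307
socket    = /tmp/mysql57.sock
log-error = /usr/local/mysql57/data/error.log
```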


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: threads in cleaning up mode

2016-04-04 Thread shawn l.green



On 4/2/2016 10:36 AM, geetanjali mehra wrote:

Hi to all,

For many days, I have been struggling with the problem of increasing history list
length on MySQL 5.6.29. The application this server is running is IoT-based.
Also, this server has many threads running in sleeping mode. SHOW ENGINE
INNODB STATUS shows all these threads in *cleaning up* mode.
I tried all the options to reduce the history list length, but it is constantly
increasing.
Below are the current settings of purge related threads:

innodb_max_purge_lag   | 100 |
| innodb_max_purge_lag_delay | 0   |
| innodb_purge_batch_size| 1   |
| innodb_purge_threads   | 8   |

Also, please let me know whether the *cleaning up* mode of threads and
the history list length are correlated.

Any help?


Best Regards,
Geetanjali Mehra
Senior Database Administrator



No, the "cleaning up" status and your ever-increasing history are not 
related. The "cleaning up" status is a very low-impact bug in our code. 
We simply forgot to reset a flag at the end of that section of the code. 
It is not indicative of any ongoing operations.


What has happened is that one of those idle threads (the sleeping ones) has 
opened a transaction but never committed it. That is why your history 
list is never shrinking.  As was already said in another response, this 
is an indication of poor client-side transaction management. If you 
start a transaction, you must always terminate it, or situations like 
yours will happen. (This is also one of the dangers if someone sets 
autocommit=0 and forgets to change it back.)
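One way to identify the offending idle session is to list open transactions from the InnoDB information_schema tables (available in 5.6); the oldest transaction is the one pinning the history list:

```sql
-- trx_mysql_thread_id maps to the Id column of SHOW PROCESSLIST,
-- so the result tells you which client connection to investigate or kill
SELECT trx_mysql_thread_id, trx_started, trx_state
FROM information_schema.INNODB_TRX
ORDER BY trx_started;
```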



--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: need help from the list admin

2016-04-01 Thread shawn l.green



On 4/1/2016 10:08 AM, Lentes, Bernd wrote:



- On Apr 1, 2016, at 3:12 PM, Bernd Lentes 
bernd.len...@helmholtz-muenchen.de wrote:

Btw:
i read about isolation levels. REPEATABLE READ is the default for InnoDB.
http://dev.mysql.com/doc/refman/5.5/en/glossary.html#glos_repeatable_read says:

"...so that all queries within a transaction see data from the same snapshot,
that is, the data as it was at the time the transaction started.".

http://dev.mysql.com/doc/refman/5.5/en/glossary.html#glos_consistent_read says:

"With the repeatable read isolation level, the snapshot is based on the time
when the first read operation is performed".

What is true ? when the transaction started or when the first read is performed 
?



Until you need to establish a snapshot of the data, you don't need 
a snapshot position.


The transaction physically begins (rows begin to be protected against 
changes by other transactions) with the first read.


Consider the alternative: If we started protecting data with the START 
TRANSACTION command we would need to protect every row in every table in 
every database.  That is simply not efficient.


We protect the pages that contain the rows that are physically required 
by the individual transaction. This is a much smaller locking footprint 
and is much easier to manage.
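If you truly need the snapshot established at transaction start rather than at the first read, MySQL provides an explicit modifier for that; a sketch (the table name is a placeholder):

```sql
-- the consistent snapshot is taken here, at START TRANSACTION time;
-- a plain START TRANSACTION would defer it until the first read
START TRANSACTION WITH CONSISTENT SNAPSHOT;

SELECT COUNT(*) FROM some_table;  -- sees data as of the snapshot

COMMIT;
```

Note that WITH CONSISTENT SNAPSHOT only has an effect at the REPEATABLE READ isolation level.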





Bernd




--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: need help from the list admin

2016-04-01 Thread shawn l.green



On 4/1/2016 9:12 AM, Lentes, Bernd wrote:

- On Mar 25, 2016, at 9:54 PM, shawn l.green shawn.l.gr...@oracle.com wrote:



"Unsafe" in that sense replies to the fact that certain commands can
have a different effect when processed from the Binary Log than they did
when they were executed originally on the system that wrote the Binary
Log. This would be true for both a point-in-time recovery situation and
for replication. The topic of unsafe commands is covered rather well on
these pages:
http://dev.mysql.com/doc/refman/5.6/en/replication-rbr-safe-unsafe.html
http://dev.mysql.com/doc/refman/5.6/en/replication-sbr-rbr.html

This is particularly true for commands that may cross transactional
boundaries and change non-transactional tables.  The effect of those
commands are apparent immediately to any other user of the server. They
do not rely on the original transaction to complete with a COMMIT. The
workaround we employed was to keep the non-transactional table locked
(to keep others from altering it) until the transaction completes
(COMMIT or ROLLBACK). That way we do our best to make all changes
"permanent" at the same time.



Hi,

oh my god. The more i read the more i'm getting confused. I totally underrated 
replication.
But i will not give up ;-) And i appreciate your help, Shawn.
What do you mean with the workaround ? Does MySQL this automatically or has it 
be done
in the app code ?



It's inside the server. You don't need to do anything as a user.



You would be better served by first converting your MyISAM tables to
InnoDB to stop mixing storage engine behaviors (transactional and
non-transactional) within the scope of a single transaction. But if you
cannot convert them, using MIXED will be a good compromise.


Is the mix of MyISAM and InnoDB a problem with Row-Based-Logging or
with Statement-Based-Logging or with both ?




Both.



Look at this sequence and think what would happen without that "stronger
locking" you mentioned earlier.

(connection 1)
   begin transaction
   INSERT myisam_table SELECT ... FROM InnoDB_table WHERE ...
  (connection 2)
  DELETE myisam_table WHERE ...  (this removes one of the rows that
  connection 1 just added)

(end of connection 2)

(connection 1)
   COMMIT

When the slave sees this sequence, it will get the command from
Connection2 first (it completed first so it winds up in the Binary Log).
It removed 8 rows on the master but it would only see 7 on the slave.
Why? The 8th row has not been added to the MyISAM table on the slave
because the transaction that does it hasn't been recorded to the Binary
Log yet.

That's where the stronger locking comes into play. If we had not 
blocked connection 2 until connection 1 completed, things would have 
been out of order, temporally speaking. It's still possible for things 
to happen out of sequence on the slave when mixing transactional and 
non-transactional tables in the same transaction.



I don't understand the example:
Does "begin transaction" and "COMMIT" have any influence on the insert ?
 From what i understand a myisam table does not support transactions,
so it should not care about "begin transaction" and "commit".
So the insert should be done immediately. The select on the InnoDB also
should not wait, because it's applied without "LOCK IN SHARE MODE".
So x lines are added immediately. This is done on the master, written in the log
and then replicated to the slave, which also adds x lines.
Then connection 2 deletes 8 rows, one is from the previous insert.
First on the master and then on the slave.
I assume that the connections are established in the order they appear here
(connection 2 is established after the insert in connection 1).
So on both 8 rows are deleted.





You said, "This is done on the master, written in the log and then 
replicated to the slave, "


The INSERT would not appear in the Binary log until after session 1 
commits. Even if session 1 does a rollback, you would still see the 
entire transaction including the ROLLBACK. We have to do it that way to 
preserve the transaction isolation of the InnoDB data.


Yes, you read the shorthand correctly and in the correct temporal sequence.
  session1 did two commands.
  session2 issued one command.
  session1 did a commit.

It does not matter whether the sessions were created in that order or not. 
Only the sequence in which the commands are executed matters.






This takes us to the next point you have...

The doc says: "Due to concurrency issues, a slave can become
inconsistent when a transaction contains updates to both transactional
and nontransactional tables. MySQL tries to preserve causality among
these statements by writing nontransactional statements to the
transaction cache, which is flushed upon commit. However, problems arise
when modifications done to nontransactional tables on behalf of a
transaction becom

Re: need help from the list admin

2016-03-30 Thread shawn l.green



On 3/30/2016 1:26 PM, Lentes, Bernd wrote:

- On Mar 30, 2016, at 7:04 PM, Reindl Harald h.rei...@thelounge.net wrote:



So I should use the default (autocommit=1)?


no, you should do what is appropriate for your application

if you don't care that inserts/updates triggered by, let's say, a
webrequest may be half written due to a crash or restart, use autocommit


Autocommit means that every statement is committed implicitly. Right?
Commit works only in conjunction with InnoDB tables and transactions. That's 
what I understand.
I thought when I make e.g. an insert into an InnoDB table, and that insert is 
not done completely (due to a crash, restart, whatever),
it is rolled back automatically after the restart. Is that wrong?



it depends: If the transaction made it into the Binary Log (if it is 
enabled) and the REDO log as "committed", then InnoDB will finish the 
commit (put the actual data in its proper place in the data files) after 
recovery. If not, it will roll back and your data remains as it was.


http://dev.mysql.com/doc/refman/5.6/en/innodb-recovery.html



if you care that all or nothing is written, use transactions
if you care about that, don't mix non-transactional tables with InnoDB


I'm planning to convert the MyISAM tables to InnoDB.



That will solve many of your data consistency problems (particularly 
those related to how things are recorded in the Binary Log), presuming 
you surround changes that involve multiple commands with transaction 
control commands.


If your sets of data changes only need one command to complete, then the 
overhead of issuing explicit START TRANSACTION and COMMIT commands is 
just going to create work you don't need for your workflow. If you need 
more than one command to make a complete and consistent update to your 
data, then use a transaction. If not, operating in autocommit mode is 
ideal.
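
To sketch that rule, here is a small example with Python's stdlib sqlite3
standing in for InnoDB (an assumption for runnability): a single statement is
fine in autocommit mode, but a two-statement change needs an explicit
transaction so a failure rolls back both parts together.

```python
# A "transfer" that needs two UPDATEs: wrapping them in an explicit
# transaction keeps the data consistent when something fails in the middle.
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
con.execute("CREATE TABLE acct (name TEXT PRIMARY KEY, bal INTEGER)")
con.executemany("INSERT INTO acct VALUES (?, ?)", [("a", 100), ("b", 0)])

try:
    con.execute("BEGIN")  # like START TRANSACTION
    con.execute("UPDATE acct SET bal = bal - 50 WHERE name = 'a'")
    raise RuntimeError("simulated crash between the two statements")
    con.execute("UPDATE acct SET bal = bal + 50 WHERE name = 'b'")  # never runs
except RuntimeError:
    con.execute("ROLLBACK")  # both accounts return to their consistent state

print(con.execute("SELECT bal FROM acct ORDER BY name").fetchall())  # [(100,), (0,)]
```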




Bernd



--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql



Re: need help from the list admin

2016-03-28 Thread shawn l.green

Hello Bernd,

On 3/28/2016 3:36 PM, Lentes, Bernd wrote:



- On Mar 27, 2016, at 2:49 PM, Reindl Harald h.rei...@thelounge.net wrote:


On 27.03.2016 at 14:34, Lentes, Bernd wrote:

You would be better served by first converting your MyISAM tables to
InnoDB to stop mixing storage engine behaviors (transactional and
non-transactional) within the scope of a single transaction. But if you
cannot convert them, using MIXED will be a good compromise.


Is this a big problem? Something to take care of? Currently we have a mix.
I will ask the girl who developed it why we have both kinds. I hope I can
convert


surely - when you have non-transactional tables involved in
updates/inserts you can go and forget using transactions at all, since
an interruption or rollback would not roll back already written changes in
MyISAM tables

transactions are all about consistency - impossible with a mix of InnoDB
and MyISAM tables


I read that converting is not difficult. But does the code of our webapp have 
to be changed? It's written in PHP and Perl.
What I understand is that inserts/updates/deletions in InnoDB tables have to be 
committed. Yes?


No. The server's default is to have --autocommit=1, which means that 
there is an implicit commit at the end of every command. You do not need 
to state explicitly "COMMIT" every time you want this to happen.


In fact, disabling autocommit has gotten many new users into trouble 
because they did not understand the behavior they changed.



This has to be done in the code ? Or can we use the system variable autocommit ?


You should not need to change anything.


That means that everything is committed immediately? Is this a good solution?


It is going to behave better than the data you have now. The changes to 
the tables you will convert from MyISAM to InnoDB will not become 
visible to other sessions until after the COMMIT (implicit or explicit) 
completes. For finer-grained control over data visibility, you need to 
understand the broader topic of transaction isolation.



What does "By default, client connections begin with autocommit set to 1" in 
the doc mean?


It means that every command is already running in its own private 
mini-transaction. To start a multi-statement transaction you do not need 
to disable autocommit, you simply need to use the START TRANSACTION 
command.


Here is a reference from the 5.0 manual to illustrate that this behavior 
has been around for a long time:

http://dev.mysql.com/doc/refman/5.0/en/sql-syntax-transactions.html


Does that mean every client connection established via Perl/PHP is started 
with autocommit=1?


It is as long as:

1) the global variable autocommit=1
2) the client does nothing to change its own session variable to 
autocommit=0



And when does the commit happen? When the connection is closed? Is that 
helpful?



The commit happens at the end of each command. If you need to contain 
multiple commands within a single transaction, use START TRANSACTION and 
COMMIT.
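
A sketch of that per-command commit, again using stdlib sqlite3 as a
stand-in for a MySQL server (an assumption for the sake of a runnable
example): in autocommit mode a second session sees each statement's effect
immediately, with no explicit COMMIT.

```python
# In autocommit mode every statement gets an implicit commit, so a second
# connection sees the INSERT right away.
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "auto.db")
s1 = sqlite3.connect(path, isolation_level=None)  # autocommit behavior
s2 = sqlite3.connect(path, isolation_level=None)

s1.execute("CREATE TABLE t (id INTEGER)")
s1.execute("INSERT INTO t VALUES (1)")  # committed at end of the statement

print(s2.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 1
```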





Bernd




--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: mysql query for current date accounting returns NULL

2016-03-26 Thread shawn l.green



On 3/26/2016 4:36 PM, shawn l.green wrote:



On 3/25/2016 6:39 AM, JAHANZAIB SYED wrote:

I have Freeradius 2.x with MySQL 5.5 in Ubuntu.

I want to query the user quota for the current date. I am using the following code:

SELECT (SUM(acctinputoctets)+SUM(acctoutputoctets)) AS Total FROM
radacct where (acctstarttime between  DATE_FORMAT(NOW(),'%Y-%m-%d')
AND NOW() AND acctstoptime  between  DATE_FORMAT(NOW() ,'%Y-%m-%d')
AND NOW()) AND radacct.username='%{User-Name}'

It works fine if there is an acctstoptime value in the table, but if the user
has not disconnected yet (and has no previous session for today) it
returns NULL.

So how can I get the value even if the user's acctstoptime is NULL?




Try this...(using an earlier suggestion to the thread)

SELECT
  SUM(acctinputoctets + acctoutputoctets) AS Total
FROM radacct
WHERE
   radacct.username='%{User-Name}'
   AND acctstarttime BETWEEN CURDATE() AND NOW()
   AND (
 acctstoptime  <= NOW()
 OR acctstoptime IS NULL
   )

But in reality, can you have an acctstarttime that is >= NOW()? If not,
then you can also simplify that term to just



oops! one too many AND's

   AND AND acctstarttime >= CURDATE()


I meant to write
AND acctstarttime >= CURDATE()



and lose the BETWEEN comparison.
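
The corrected WHERE logic can be sketched against stdlib SQLite (a stand-in
assumption; MySQL's CURDATE()/NOW() are replaced here by plain ISO timestamp
strings so the example runs anywhere): sessions with a NULL acctstoptime
still count toward the day's total.

```python
# A still-connected session (acctstoptime IS NULL) must be included in the sum.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE radacct (
    username TEXT, acctstarttime TEXT, acctstoptime TEXT,
    acctinputoctets INTEGER, acctoutputoctets INTEGER)""")
con.executemany("INSERT INTO radacct VALUES (?,?,?,?,?)", [
    ("teo", "2016-03-26 08:00", "2016-03-26 09:00", 10, 20),  # closed session
    ("teo", "2016-03-26 10:00", None,               30, 40),  # still connected
])

total = con.execute("""
    SELECT SUM(acctinputoctets + acctoutputoctets)
    FROM radacct
    WHERE username = 'teo'
      AND acctstarttime >= '2016-03-26'
      AND (acctstoptime <= '2016-03-26 23:59' OR acctstoptime IS NULL)
""").fetchone()[0]
print(total)  # 100
```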



--
Shawn




Re: mysql query for current date accounting returns NULL

2016-03-26 Thread shawn l.green



On 3/25/2016 6:39 AM, JAHANZAIB SYED wrote:

I have Freeradius 2.x with MySQL 5.5 in Ubuntu.

I want to query the user quota for the current date. I am using the following code:

SELECT (SUM(acctinputoctets)+SUM(acctoutputoctets)) AS Total FROM radacct where 
(acctstarttime between  DATE_FORMAT(NOW(),'%Y-%m-%d') AND NOW() AND 
acctstoptime  between  DATE_FORMAT(NOW() ,'%Y-%m-%d') AND NOW()) AND 
radacct.username='%{User-Name}'

It works fine if there is an acctstoptime value in the table, but if the user 
has not disconnected yet (and has no previous session for today) it returns NULL.

So how can I get the value even if the user's acctstoptime is NULL?




Try this...(using an earlier suggestion to the thread)

SELECT
 SUM(acctinputoctets + acctoutputoctets) AS Total
FROM radacct
WHERE
  radacct.username='%{User-Name}'
  AND acctstarttime BETWEEN CURDATE() AND NOW()
  AND (
acctstoptime  <= NOW()
OR acctstoptime IS NULL
  )

But in reality, can you have an acctstarttime that is >= NOW()? If not, 
then you can also simplify that term to just


  AND AND acctstarttime >= CURDATE()

and lose the BETWEEN comparison.

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: need help from the list admin

2016-03-25 Thread shawn l.green

Hello Bernd,

Sorry for the delay, I wanted to make sure I had enough time to address 
all of your points.


On 3/22/2016 7:07 AM, william drescher wrote:

sent for Bernd, and to see if it works from another sender
--
  Lentes, Bernd wrote:
Hi,

I know that there is a list dedicated to replication, but when you have
a look in the archive it's nearly completely empty. Really not busy.
So I hope it's OK if I ask here.
We have a web app which runs a MySQL DB and dynamic webpages with Perl
and Apache httpd. The webpages serve reading from and writing into the db.
The db is important for our own workflow, so I'd like to make it HA. I have
two HP servers and will use SLES 11 SP4 64-bit as the OS. MySQL is 5.5.47.
For HA I'd like to use Pacemaker, which is available in the SLES High
Availability Extension. I have experience in Linux, but I'm not a
database administrator nor a developer. HA is important for us; we don't
have performance problems.
My first idea was to run the web app and the db in a virtual machine on
the host, and in case of a failure of one host Pacemaker would run the VM
on the other host. The VM would be stored on an FC SAN. I stopped following
this idea. I have bought a book about HA: "..." by Oliver Liebel. It's
only available in German, but I can recommend it; it's very detailed and
well explained.
He proposed to have two hosts, with a MySQL instance running on each
as master AND slave. But it's not a "real multi-master solution",
because Pacemaker takes care that the IP for the web app just points to
one master. So I don't have the multi-master problems with concurrent
inserts (I believe).


This is wise advice. We (MySQL Support) often recommend exactly the same 
setup:  a master + one(or more) slave(s) using replication to keep the 
slaves in relative sync. I say "relative" because replication is 
asynchronous.


All writes are directed at the master. Clients that can tolerate the 
natural lag of the replication system can use any available slave for 
read-only queries.



His idea is that host A is master for the slave on host B, and host B is
the master for the slave on host A. OK?
Let's imagine that the IP to the web app points to host A; inserts are
done on the master on host A and replicated to the slave on host B. Now
host A has problems, Pacemaker redirects the IP to host B, and
everything should be fine.
What do you think about this setup? Where is the advantage over a
"classical master/slave replication"? How should I configure
log-slave-updates in this scenario?


We have a page on that in the manual (with a diagram):
http://dev.mysql.com/doc/refman/5.6/en/replication-solutions-switch.html



Let's imagine I have two hosts again: host A is master, host B is slave.
Nothing else. No real or pseudo "multi-master". The IP points to host A.
Host A has problems, Pacemaker recognizes it, promotes B to master and
pivots the IP. Everything should be fine. Where is the disadvantage of
this setup compared to the "multi-master replication" in the book? The
OCF resource agent for MySQL should be able to handle the MySQL stuff
and the RA for the IP pivots the IP.



Remember to wait for the slave to catch up to the master it lost contact 
with. That way its data is as current as possible. Then redirect your 
clients to the new read-write node in your replication topology.




Now some dedicated questions about replication. I read a lot in the
official documentation, but some things are not clear to me.
In our db we have MyISAM and InnoDB tables.

From what I read, I'd prefer row-based replication. The doc says it is the
safest approach. But there seem to still be some problems:

The doc says: "For tables using the MYISAM storage engine, a stronger
lock is required on the slave for INSERT statements when applying them
as row-based events to the binary log than when applying them as
statements. This means that concurrent inserts on MyISAM tables are not
supported when using row-based replication."
What does this mean exactly? Concurrent inserts in MyISAM tables are
not possible when using RBL? Or unsafe in the sense that they create
inconsistencies?



"Unsafe" in that sense refers to the fact that certain commands can 
have a different effect when processed from the Binary Log than they did 
when they were executed originally on the system that wrote the Binary 
Log. This would be true for both a point-in-time recovery situation and 
for replication. The topic of unsafe commands is covered rather well on 
these pages:

http://dev.mysql.com/doc/refman/5.6/en/replication-rbr-safe-unsafe.html
http://dev.mysql.com/doc/refman/5.6/en/replication-sbr-rbr.html

This is particularly true for commands that may cross transactional 
boundaries and change non-transactional tables.  The effect of those 
commands are apparent immediately to any other user of the server. They 
do not rely on the original transaction to complete with a COMMIT. The 
workaround we employed was to keep the 

Re: can I just encrypt tables? what about the app?

2016-03-03 Thread shawn l.green



On 3/3/2016 10:40 AM, lejeczek wrote:

On 02/03/16 00:51, shawn l.green wrote:



On 3/1/2016 6:26 PM, lejeczek wrote:



On 29/02/16 21:35, shawn l.green wrote:



On 2/29/2016 3:13 PM, Reindl Harald wrote:



Am 29.02.2016 um 20:54 schrieb Gary Smith:

On 29/02/2016 19:50, Reindl Harald wrote:


cryptsetup/luks can achieve that way better


Only to a degree.


no - not only to a degree - when the question is "not store anything
unencrypted on the disk" there is no degree; it's all or nothing


Once the disk is unencrypted, you've got access to the
filesystem. If you've got physical access to the machine, then
anything
which gives you console access gives you (potentially) access to the
underlying database files. If you can get those, it's trivial to get
access to the dataset that they contain.

However, if TDE is employed, then you've got another significant
obstacle to overcome: The data is only encrypted (aiui) once it's in
memory. At this point, you're needing to do attacks on RAM to get
access
to the data - and even then, you're unlikely to get 3 bars for a
jackpot
payout of the whole database schema, assuming a decent sized
database.


in theory

in reality you don't need to hack around in the RAM - mysqld needs to
have access to the key to operate on the data, and so you need to find
only that piece

the same applies to encryption on the application side before sending
data to the db-layer - see the start and subject of this thread for how
far people are from understanding how and on what layer things are
encrypted and what exactly is protected in which context

there is no "turn this on and you are safe" without deeper
understanding



Correct. As long as the key and the lock are on the same machine,
there will be some way of opening that lock. It's just a matter of how
hard can you make it to find that key. No data is perfectly safe. No
crypto is unbreakable. Ever.

Maybe the key only exists in memory while the daemon runs? You can
hack the memory to find the key.

Maybe the key is retrieved from another key service daemon. If you
have the credentials to impersonate a valid retriever, you are in the
money.

The purpose of any encryption system is not to make it impossible to
read the data. Its purpose is to make it impractically hard for any
unauthorized parties to read it.

taking your last line and making an assumption or two, the notion of double
encryption arises - will it work?




A system called "Triple DES" does exactly what you propose and appears
to be in wide usage.
https://en.wikipedia.org/wiki/Triple_DES

The key to avoiding brute force attacks is not how many times you
scramble the data, but how long your key is. In the early days of
computers, keys were short because processing power was less. In
today's world, you must use longer keys just to stay ahead of Moore's
Law.

Quoting from
http://www.welivesecurity.com/2016/02/17/how-is-cryptography-incorporated-into-pos-terminals/


For example, DES with a 56-bit key (2^56 possible combinations) can
be broken in less than a day, since average computers can perform a
billion operations per second. However, the addition of more bits to
the string will exponentially increase the time required to crack it.



Most SSL keys (for example, those used to encrypt the information
exchanged when you visit "secure" web sites) should all have keys that
are 2048 bits or longer. If they don't already, I'll bet they are
upgrading their certificates soon.
http://news.netcraft.com/archives/2012/09/10/minimum-rsa-public-key-lengths-guidelines-or-rules.html
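
A quick back-of-envelope sketch of that "each extra bit doubles the work"
point, at an assumed rate of a billion guesses per second (the rate quoted
above; it is an illustration, not a benchmark):

```python
# Time to exhaust a keyspace of n bits at a fixed guessing rate: the work
# grows as 2**n, so every added bit doubles the cracking time.
rate = 1_000_000_000  # guesses per second (assumed, per the quote above)

for bits in (40, 56, 128):
    seconds = 2**bits / rate
    print(f"{bits}-bit key: about {seconds / 86400:.1f} days to exhaust")
```

Even at this optimistic single-machine rate, each step up in key length
multiplies the exhaustive-search time astronomically, which is why modern
keys are so much longer than DES's 56 bits.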




How do I back up in a way that takes advantage of this in-database
encryption?
Can any of the present backup solutions do it?
Many thanks.



As the new encryption layer we are discussing (TDE) sits between the 
storage engine and the physical file (the data in the file is 
encrypted), any technique for doing safe file-level backups will 
preserve the encryption.


Examples:
  cold backups (copying off the files after stopping the daemon)
  FTWRL + wait for background threads to complete their queues + file 
system snapshot
  MySQL Enterprise Backup (coming soon for TDE tables; we are still 
working out some early bugs between TDE and MEB)


Any technique that reads the decrypted data and transcribes it to text 
would not be a backup technique that preserves that encryption.

Example: mysqldump


(NOTE: "FTWRL" is a shorthand for the command FLUSH TABLES WITH READ 
LOCK. It can save a lot of typing. )


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: characters oddity

2016-03-02 Thread shawn l.green



On 3/2/2016 11:29 AM, McGranahan, Jamen wrote:

We have two virtual machines, both running RedHat 7. Both are also running MySQL 
5.6.29, and both have the same data. We have two databases, however, that keep 
throwing odd characters on one system while it's OK on the other, and we've not 
been able to figure out why.

What it should look like (from the test machine, pointing to the database on 
the test machine):
Décimas a la censura de Carmen Aristegui
Guillermo Velázquez Benavidez

What it looks like on our Production database (from the test machine, pointing 
to the production database):
DÃ©cimas a la censura de Carmen Aristegui
Guillermo VelÃ¡zquez Benavidez

We have verified the my.cnf is the same on both machines, using utf8 as the 
default character set. We have also verified the character sets for the 
databases and tables are identical. We know it has to be something with the 
MySQL database on our Production server because we can point Production to the 
Test database and the characters are translated correctly. But we just 
haven't been able to figure out what it is - and it's been 48 hours' worth of 
work and investigation. Any advice, guidance, or suggestions would be greatly 
appreciated!  Thank you!

Jamen McGranahan
Systems Services Librarian
Vanderbilt University LIbrary
Central Library
Room 811
419 21st Avenue South
Nashville, TN 37214




It looks like one of your clients is not recognizing the data as part of a 
multi-byte character set, or the data was loaded into the second 
(production) system after being transcoded via a single-byte character 
set, or your display terminal is not using the right code page to 
show you the correct characters (less likely, since you are using the 
same client on the same machine).


If you compare the HEX() of both fields on both systems, do they match? 
If so, then it's a client-side translation error. If they do not match, 
then you introduced the extra characters during the dump/restore 
process. The "Ã©" is what the "é" would look like if you were reading 
its UTF-8 bytes in latin1.
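
That transcoding error is easy to reproduce in a couple of lines of Python
(added here as an illustration; the hex string is what a HEX() comparison on
the column would surface):

```python
# The UTF-8 encoding of "é" is the two bytes C3 A9; reading those same bytes
# as latin1 produces the two characters "Ã©" - the classic mojibake above.
utf8_bytes = "é".encode("utf-8")
print(utf8_bytes.hex().upper())      # C3A9 - what HEX() would show you
print(utf8_bytes.decode("latin-1"))  # Ã© - the double-encoded display
```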


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: can I just encrypt tables? what about the app?

2016-03-01 Thread shawn l.green



On 3/1/2016 6:26 PM, lejeczek wrote:



On 29/02/16 21:35, shawn l.green wrote:



On 2/29/2016 3:13 PM, Reindl Harald wrote:



On 29.02.2016 at 20:54, Gary Smith wrote:

On 29/02/2016 19:50, Reindl Harald wrote:


cryptsetup/luks can achieve that way better


Only to a degree.


no - not only to a degree - when the question is "not store anything
unencrypted on the disk" there is no degree; it's all or nothing


Once the disk is unencrypted, you've got access to the
filesystem. If you've got physical access to the machine, then anything
which gives you console access gives you (potentially) access to the
underlying database files. If you can get those, it's trivial to get
access to the dataset that they contain.

However, if TDE is employed, then you've got another significant
obstacle to overcome: The data is only encrypted (aiui) once it's in
memory. At this point, you're needing to do attacks on RAM to get
access
to the data - and even then, you're unlikely to get 3 bars for a
jackpot
payout of the whole database schema, assuming a decent sized database.


in theory

in reality you don't need to hack around in the RAM - mysqld needs to
have access to the key to operate on the data, and so you need to find
only that piece

the same applies to encryption on the application side before sending data
to the db-layer - see the start and subject of this thread for how far
people are from understanding how and on what layer things are encrypted
and what exactly is protected in which context

there is no "turn this on and you are safe" without deeper understanding



Correct. As long as the key and the lock are on the same machine,
there will be some way of opening that lock. It's just a matter of how
hard can you make it to find that key. No data is perfectly safe. No
crypto is unbreakable. Ever.

Maybe the key only exists in memory while the daemon runs? You can
hack the memory to find the key.

Maybe the key is retrieved from another key service daemon. If you
have the credentials to impersonate a valid retriever, you are in the
money.

The purpose of any encryption system is not to make it impossible to
read the data. Its purpose is to make it impractically hard for any
unauthorized parties to read it.

taking your last line and making an assumption or two, the notion of double
encryption arises - will it work?




A system called "Triple DES" does exactly what you propose and appears 
to be in wide usage.

https://en.wikipedia.org/wiki/Triple_DES

The key to avoiding brute force attacks is not how many times you 
scramble the data, but how long your key is. In the early days of 
computers, keys were short because processing power was less. In today's 
world, you must use longer keys just to stay ahead of Moore's Law.


Quoting from
http://www.welivesecurity.com/2016/02/17/how-is-cryptography-incorporated-into-pos-terminals/

For example, DES with a 56-bit key (2^56 possible combinations) can be broken 
in less than a day, since average computers can perform a billion operations 
per second. However, the addition of more bits to the string will exponentially 
increase the time required to crack it.



Most SSL keys (for example, those used to encrypt the information 
exchanged when you visit "secure" web sites) should all have keys that 
are 2048 bits or longer. If they don't already, I'll bet they are 
upgrading their certificates soon.

http://news.netcraft.com/archives/2012/09/10/minimum-rsa-public-key-lengths-guidelines-or-rules.html


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: can I just encrypt tables? what about the app?

2016-02-29 Thread shawn l.green



On 2/29/2016 3:13 PM, Reindl Harald wrote:



On 29.02.2016 at 20:54, Gary Smith wrote:

On 29/02/2016 19:50, Reindl Harald wrote:


cryptsetup/luks can achieve that way better


Only to a degree.


no - not only to a degree - when the question is "not store anything
unencrypted on the disk" there is no degree; it's all or nothing


Once the disk is unencrypted, you've got access to the
filesystem. If you've got physical access to the machine, then anything
which gives you console access gives you (potentially) access to the
underlying database files. If you can get those, it's trivial to get
access to the dataset that they contain.

However, if TDE is employed, then you've got another significant
obstacle to overcome: The data is only encrypted (aiui) once it's in
memory. At this point, you're needing to do attacks on RAM to get access
to the data - and even then, you're unlikely to get 3 bars for a jackpot
payout of the whole database schema, assuming a decent sized database.


in theory

in reality you don't need to hack around in the RAM - mysqld needs to
have access to the key to operate on the data, and so you need to find
only that piece

the same applies to encryption on the application side before sending data
to the db-layer - see the start and subject of this thread for how far
people are from understanding how and on what layer things are encrypted
and what exactly is protected in which context

there is no "turn this on and you are safe" without deeper understanding



Correct. As long as the key and the lock are on the same machine, there 
will be some way of opening that lock. It's just a matter of how hard 
can you make it to find that key. No data is perfectly safe. No crypto 
is unbreakable. Ever.


Maybe the key only exists in memory while the daemon runs? You can hack 
the memory to find the key.


Maybe the key is retrieved from another key service daemon. If you have 
the credentials to impersonate a valid retriever, you are in the money.


The purpose of any encryption system is not to make it impossible to 
read the data. Its purpose is to make it impractically hard for any 
unauthorized parties to read it.


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: can I just encrypt tables? what about the app?

2016-02-29 Thread shawn l.green

Hi Reindl,

On 2/29/2016 2:16 PM, Reindl Harald wrote:



On 29.02.2016 at 20:07, Jesper Wisborg Krogh wrote:

Hi Lejeczek,

On 1/03/2016 00:31, lejeczek wrote:

hi everybody

a novice type of question - having a PHP + MySQL app, can one just encrypt
tables (internally in MySQL) and PHP will be fine?
If not, would it be easy to re-code the PHP to work with this new,
internal encryption?


Starting with MySQL 5.7.11, there is transparent data encryption (TDE)
for InnoDB tables. If you use it, it is, as the name suggests,
transparent for PHP. See also:
https://dev.mysql.com/doc/refman/5.7/en/innodb-tablespace-encryption.html


I still don't grok a use case for such encryption, because when a
webserver gets compromised you have the same access as before, just
slower with more overhead in general

what is the purpose of encryption on that layer?




Some process requirements state that some data should never be stored on 
disk in plain text. This is one way to meet those requirements.


Some data has been compromised not by cracking the primary database but 
by breaking into a server containing backups of the data. This new 
feature allows file-level backups (like those generated by MySQL 
Enterprise Backup) to be secure.


What that feature achieves is that the data will be encrypted at rest, 
not just in flight (using SSL).


Clearly, this does not defeat an attacker who is able to compromise or 
become an authenticated client who is normally allowed to read that 
data. To fix that problem, you must employ application-level encryption 
which encodes the data actually stored on the table. Clearly this last 
type of encryption breaks the database server's ability to index the 
data as the server would have no key to decrypt the content of the 
fields to build any normal (clear-content) indexes on it. It would only 
be able to index the encrypted (opaque) data. The clients would need to 
code their queries with WHERE clauses looking for the exact encrypted 
values they wanted to find.
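
A minimal sketch of that last point, with stdlib SQLite and a salted
SHA-256 standing in (purely an assumption for illustration; real
application-level encryption would use a proper cipher) for a deterministic
client-side encoding: the server can only match the opaque value exactly.

```python
# The application encodes the value before it ever reaches the database, so
# the WHERE clause must search for the encoded form - the server cannot
# index or compare the clear text it never sees.
import hashlib, sqlite3

def opaque(value: str) -> str:
    # Deterministic client-side encoding (a hypothetical scheme for the demo).
    return hashlib.sha256(b"app-salt:" + value.encode()).hexdigest()

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, ssn_enc TEXT)")
con.execute("INSERT INTO users VALUES (1, ?)", (opaque("123-45-6789"),))

# The client encodes the search value the same way before querying.
row = con.execute("SELECT id FROM users WHERE ssn_enc = ?",
                  (opaque("123-45-6789"),)).fetchone()
print(row)  # (1,)
```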


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: using alias in where clause

2016-01-29 Thread shawn l.green



On 1/28/2016 6:30 PM, Larry Martell wrote:

On Thu, Jan 28, 2016 at 5:45 PM, shawn l.green <shawn.l.gr...@oracle.com> wrote:



On 1/28/2016 3:32 PM, Larry Martell wrote:


On Thu, Jan 28, 2016 at 2:13 PM, shawn l.green <shawn.l.gr...@oracle.com>
wrote:




On 1/28/2016 1:14 PM, Larry Martell wrote:



On Tue, Jan 26, 2016 at 8:40 AM, Hal.sz S.ndor <h...@tbbs.net> wrote:



2016/01/25 19:16 ... Larry Martell:




SELECT IFNULL(f_tag_bottom,
IFNULL(f_tag_bottom_major_axis,
 IFNULL(f_tag_bottom_minor_axis,
  IFNULL(f_tag_ch_x_bottom,
   IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
   STDDEV(ch_x_top)





Of course, this isn't your real problem, but you could use COALESCE
instead
of all those IFNULLs (and you don't need the last one):
   SELECT COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) as ftag,
   STDDEV(ch_x_top)
   

As Johnny Withers points out, you may repeat the expression in the
WHERE-clause:
   WHERE COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) =
'E-CD7'
If really only one of those is not NULL, it is equivalent to this:
   'E-CD7' IN (f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom)




Many thanks to Hal.sz and Johnny - I had forgotten about coalesce and
I didn't know I could use that in a where clause. This worked great
for the requirement I had, but of course, once that was implemented my
client changed the requirements. Now they want to know which of the 5
f_tag_* columns was matched. Not sure how I'll do that. Probably need
another query.


One option to consider is to add another column to the query with a CASE
similar to this...

SELECT
, ... original fields ...
, CASE
WHEN f_tag_bottom THEN 'f_tag_bottom'
WHEN f_tag_bottom_major_axis THEN 'f_tag_bottom_major_axis'
... repeat for the rest of the fields to test ...
ELSE 'none'
END as match_flag
FROM ...

Technically, the term in the WHERE clause should prevent a 'none' result
but
I put it there to help future-proof the code.



Won't that find the first one of the f_tags that is not null, but not
necessarily the one that was matched by the where clause?



I slightly cheated in my example.

My CASE...END was listing terms in the same order as the COALESCE() function
you were using in the WHERE clause. The cheat was that only a non-null value
could be TRUE. To be more accurate, I should have used
... WHEN f_tag_bottom IS NOT NULL THEN ...
That way you end up with a true boolean check within the CASE decision tree.

As the COALESCE() is testing its terms in the same sequence as the
CASE...END, there should be no difference between the two checks.  But, that
also adds to the maintenance cost of this query. If you should change the
order of the f_tag checks in the COALESCE() function, you would need to
change the CASE...END to the same sequence.


Yes, I see that, but does the case only look at the filtered rows? For
example, lets say there's this data:

row 1: f_tag_bottom = "ABC"
row 2: f_tag_bottom_major_axis = "XYZ"

and my where clause has this:

WHERE COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) = 'XYZ'

won't the CASE pick up row 1? Whereas I want it to pick up row 2.



Yes it would.

Just modify the check in the case from "IS NOT NULL" to "='XYZ'" to make 
both functions (CASE and COALESCE) find the same field value in the same 
row at the same time.
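Putting the pieces of this thread together, a sketch of the combined query (the f_tag_* columns come from earlier in the thread; the table name and the 'XYZ' value are illustrative):

```sql
SELECT
    COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
             f_tag_bottom_minor_axis, f_tag_ch_x_bottom,
             f_tag_ch_y_bottom) AS ftag,
    CASE  -- same test value and same column order as the COALESCE above
        WHEN f_tag_bottom            = 'XYZ' THEN 'f_tag_bottom'
        WHEN f_tag_bottom_major_axis = 'XYZ' THEN 'f_tag_bottom_major_axis'
        WHEN f_tag_bottom_minor_axis = 'XYZ' THEN 'f_tag_bottom_minor_axis'
        WHEN f_tag_ch_x_bottom       = 'XYZ' THEN 'f_tag_ch_x_bottom'
        WHEN f_tag_ch_y_bottom       = 'XYZ' THEN 'f_tag_ch_y_bottom'
        ELSE 'none'
    END AS match_flag
FROM measurements  -- hypothetical table name
WHERE COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
               f_tag_bottom_minor_axis, f_tag_ch_x_bottom,
               f_tag_ch_y_bottom) = 'XYZ';
```

Because the CASE compares against the same value and walks the columns in the same order as the COALESCE, both pick out the same field in the same row.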


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: using alias in where clause

2016-01-28 Thread shawn l.green



On 1/28/2016 1:14 PM, Larry Martell wrote:

On Tue, Jan 26, 2016 at 8:40 AM, Halász Sándor wrote:

2016/01/25 19:16 ... Larry Martell:


SELECT IFNULL(f_tag_bottom,
  IFNULL(f_tag_bottom_major_axis,
   IFNULL(f_tag_bottom_minor_axis,
IFNULL(f_tag_ch_x_bottom,
 IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
 STDDEV(ch_x_top)



Of course, this isn't your real problem, but you could use COALESCE instead
of all those IFNULLs (and you don't need the last one):
 SELECT COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) as ftag,
 STDDEV(ch_x_top)
 

As Johnny Withers points out, you may repeat the expression in the
WHERE-clause:
 WHERE COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) = 'E-CD7'
If really only one of those is not NULL, it is equivalent to this:
 'E-CD7' IN (f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom)


Many thanks to Halász and Johnny - I had forgotten about coalesce and
I didn't know I could use that in a where clause. This worked great
for the requirement I had, but of course, once that was implemented my
client changed the requirements. Now they want to know which of the 5
f_tag_* columns was matched. Not sure how I'll do that. Probably need
another query.

One option to consider is to add another column to the query with a CASE 
similar to this...


SELECT
, ... original fields ...
, CASE
  WHEN f_tag_bottom THEN 'f_tag_bottom'
  WHEN f_tag_bottom_major_axis THEN 'f_tag_bottom_major_axis'
  ... repeat for the rest of the fields to test ...
  ELSE 'none'
  END as match_flag
FROM ...

Technically, the term in the WHERE clause should prevent a 'none' result 
but I put it there to help future-proof the code.


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: using alias in where clause

2016-01-28 Thread shawn l.green



On 1/28/2016 3:32 PM, Larry Martell wrote:

On Thu, Jan 28, 2016 at 2:13 PM, shawn l.green <shawn.l.gr...@oracle.com> wrote:



On 1/28/2016 1:14 PM, Larry Martell wrote:


On Tue, Jan 26, 2016 at 8:40 AM, Halász Sándor <h...@tbbs.net> wrote:


2016/01/25 19:16 ... Larry Martell:



SELECT IFNULL(f_tag_bottom,
   IFNULL(f_tag_bottom_major_axis,
IFNULL(f_tag_bottom_minor_axis,
 IFNULL(f_tag_ch_x_bottom,
  IFNULL(f_tag_ch_y_bottom, NULL) as ftag,
  STDDEV(ch_x_top)




Of course, this isn't your real problem, but you could use COALESCE
instead
of all those IFNULLs (and you don't need the last one):
  SELECT COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) as ftag,
  STDDEV(ch_x_top)
  

As Johnny Withers points out, you may repeat the expression in the
WHERE-clause:
  WHERE COALESCE(f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom) = 'E-CD7'
If really only one of those is not NULL, it is equivalent to this:
  'E-CD7' IN (f_tag_bottom, f_tag_bottom_major_axis,
f_tag_bottom_minor_axis, f_tag_ch_x_bottom, f_tag_ch_y_bottom)



Many thanks to Halász and Johnny - I had forgotten about coalesce and
I didn't know I could use that in a where clause. This worked great
for the requirement I had, but of course, once that was implemented my
client changed the requirements. Now they want to know which of the 5
f_tag_* columns was matched. Not sure how I'll do that. Probably need
another query.


One option to consider is to add another column to the query with a CASE
similar to this...

SELECT
, ... original fields ...
, CASE
   WHEN f_tag_bottom THEN 'f_tag_bottom'
   WHEN f_tag_bottom_major_axis THEN 'f_tag_bottom_major_axis'
   ... repeat for the rest of the fields to test ...
   ELSE 'none'
   END as match_flag
FROM ...

Technically, the term in the WHERE clause should prevent a 'none' result but
I put it there to help future-proof the code.


Won't that find the first one of the f_tags that is not null, but not
necessarily the one that was matched by the where clause?



I slightly cheated in my example.

My CASE...END was listing terms in the same order as the COALESCE() 
function you were using in the WHERE clause. The cheat was that only a 
non-null value could be TRUE. To be more accurate, I should have used

   ... WHEN f_tag_bottom IS NOT NULL THEN ...
That way you end up with a true boolean check within the CASE decision 
tree.


As the COALESCE() is testing its terms in the same sequence as the 
CASE...END, there should be no difference between the two checks.  But, 
that also adds to the maintenance cost of this query. If you should 
change the order of the f_tag checks in the COALESCE() function, you 
would need to change the CASE...END to the same sequence.


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: mysql\innodb_table_stats.ibd. Cannot open tablespace

2016-01-22 Thread shawn l.green

Hello Neil,

On 1/22/2016 3:23 PM, Neil Tompkins wrote:

2016-01-22 19:43:48 2556 [ERROR] InnoDB: Attempted to open a previously
opened tablespace. Previous tablespace mysql/innodb_table_stats uses space
ID: 1 at filepath: .\mysql\innodb_table_stats.ibd. Cannot open tablespace
sportstrader/event which uses space ID: 1 at filepath:
.\sportstrader\event.ibd
InnoDB: Error: could not open single-table tablespace file
.\sportstrader\event.ibd


This is where you need to focus. Something has modified the tablespace 
ID in the event.ibd file (the table `sportstrader`.`event`) so that it 
has the same ID as one of the system tables (the one used to store 
persistent stats for the InnoDB storage engine).


Each tablespace must have its own unique ID value.

This could have been anything from a bad sector on disk, a stray write 
by some other program, an intentional file-system-level change by some 
nefarious person, a bad memory cell that held the ID value of the 
`event` table's first page that caused a wrong value to be written to 
disk when the table closed, a bad backup/restore cycle ... the list can 
go on practically forever.  The problem is, the ID value in the .ibd 
file now conflicts with that of another tablespace file.


The resolution is to delete that table then restore it from a known good 
copy (preferably one from a backup).  If necessary, use the content of 
the Binary log to recover changes made to the table since that backup 
was made.


If you don't have a good copy (or any Binary log content) then try this:

1) Make a copy of event.ibd then remove it from that folder
2) Restart mysqld; it will complain about the missing file in the 
Error Log. This is expected.

3) Use the techniques here to DROP that table.
http://dev.mysql.com/doc/refman/5.6/en/innodb-troubleshooting-datadict.html
4) Create an empty copy of that table (correct schema, no data)
5) Use ALTER TABLE...DISCARD TABLESPACE then ALTER TABLE...IMPORT 
TABLESPACE to replace the empty .ibd file you have now with the one you 
backed up earlier.
(demonstrated at the bottom of that link I just provided for "orphaned" 
tablespaces)
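Steps 4 and 5 might look like the following sketch. It assumes the table is `sportstrader`.`event`, that the CREATE TABLE reproduces the original column definitions exactly (the column shown here is a placeholder), and that you copy the saved event.ibd back into the sportstrader/ data directory between the two ALTER statements:

```sql
-- 4) Recreate the table with its original definition, no data
CREATE TABLE sportstrader.event (
    id INT NOT NULL PRIMARY KEY  /* ... original columns go here ... */
) ENGINE=InnoDB;

-- 5a) Detach the freshly created (empty) tablespace file
ALTER TABLE sportstrader.event DISCARD TABLESPACE;

-- ... at the file system level, copy the saved event.ibd
--     back into the sportstrader/ directory ...

-- 5b) Re-attach the restored tablespace file
ALTER TABLE sportstrader.event IMPORT TABLESPACE;
```

Without a matching .cfg metadata file the IMPORT may raise a warning; the manual page linked above covers that "orphaned tablespace" case.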


Then, when you can, schedule a full hardware check to look for 
potentially failing memory or disk media. That's the most common cause 
for a fault like this.


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Renaming a table that has a trigger, then attempting to drop that trigger results in 'table doesn't exist' error.

2016-01-18 Thread shawn l.green

Hello Michael,

On 1/6/2016 12:51 PM, Michael Vaughan wrote:

If you execute the script below, you will get the following error:
'Error Code: 1146. Table 'testschema.TestRenamed' doesn't exist"

delimiter //

CREATE TABLE Test(
   id  int not null primary key auto_increment,
   name varchar(255)
)//

CREATE TRIGGER TEST_TRIGGER
BEFORE INSERT ON Test
FOR EACH ROW
BEGIN
SET NEW.name = CONCAT(NEW.name, '_X');
END//

RENAME TABLE Test TO TestRenamed//

DROP TRIGGER Test.TEST_TRIGGER//



Are there any workarounds for this?



Thank you for also reporting this as a bug.

http://bugs.mysql.com/bug.php?id=79873

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Global read lock on delete

2015-12-09 Thread shawn l.green



On 12/9/2015 9:59 AM, Artem Kuchin wrote:

Hello!

I am actually using MariaDB, but they do not seem to have any public
discussion system, and I suppose the engine is basically the same, so
the problems are probably the same.

Today I set up the server to show locks and noticed this:

THREAD_ID  LOCK_MODE                 LOCK_DURATION    LOCK_TYPE             TABLE_SCHEMA  TABLE_NAME
268871     MDL_INTENTION_EXCLUSIVE   MDL_EXPLICIT     Global read lock
270022     MDL_INTENTION_EXCLUSIVE   MDL_STATEMENT    Global read lock
268871     MDL_SHARED_NO_READ_WRITE  MDL_EXPLICIT     Table metadata lock   spru          searchsobjects
268871     MDL_SHARED_NO_READ_WRITE  MDL_EXPLICIT     Table metadata lock   spru          searches
268871     MDL_INTENTION_EXCLUSIVE   MDL_EXPLICIT     Schema metadata lock  spru
270022     MDL_INTENTION_EXCLUSIVE   MDL_TRANSACTION  Schema metadata lock  spru

The threads are
ID: 270022 TIME:185 COMMAND:Query STATE:Waiting for table metadata lock
USER:spru DB:spru
LOCK TABLES searchsobjects WRITE, searches WRITE

ID: 268871 TIME:3 COMMAND:Query STATE:updating USER:spru DB:spru
DELETE FROM searchsobjects WHERE search_id IN (
3680622,3677720,3679348,3679347,3680621,3678106,3678105,3680597,3680596,3680595,3676915,3676914,3676913,36777

19,3677718,3677717,3677716,3676984,3677795,3677794,3677793,3677792,3677796,3677802,3677801,3677800,3677799,3677798,3677797,3680580,3676988,3677791,3680589,36

77790,3677789,3677788,3677787,3677786,3677785,3677784,3677783,3677782,3680575,3677781,3677780,369,368,367,366,365,364,363,362

,361,360,3677769,3677768,3677767,3677766,3677765,3677764,3680619,3680620,3682405,3677763,3677762,3677761,3677760,3677759,3677758,3680601,3677757,3680

627,3680628,3680576,3680577,3680625,3680626,3680624,3680623,3677754,3679280,3679279,3679278,3679277,3679276,3679867,3679890,3680588,3677753,3677064,3677752,3

677751,3677750,3677749,3679608,3679607,3679606,3679605,3680613 )


So, by thread id it seems like the DELETE started first and then the
LOCK TABLES was issued. However, I do not understand how a GLOBAL READ
LOCK became involved in all this? And both the LOCK TABLES and the
DELETE requested a global read lock.

All tables are MyISAM. MariaDB is 10.0.22 (MySQL 5.6 based, as I
understand).

Artem



You gave the answer in your last statement: "All tables are myisam" .

The MyISAM storage engine is not transactional and it does not do 
row-level locking. All UPDATE and DELETE operations require a full table 
lock to perform and those must wait for all earlier readers or writers 
to exit the table before they can start.  INSERT operations are special 
as you can enable a mode to allow INSERTs to happen only at the end of 
the file and not be blocked while one of the other two operations are in 
progress.


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Global read lock on delete

2015-12-09 Thread shawn l.green



On 12/9/2015 12:06 PM, Artem Kuchin wrote:

On 12/9/2015 7:35 PM, shawn l.green wrote:


 INSERT operations are special as you can enable a mode to allow
INSERTs to happen only at the end of the file and not be blocked while
one of the other two operations are in progress.



I cannot find anything about that. Can you be a little more specific?
It is unrelated to my question, but would be great to know too.

Artem



It's been a long while since I had to think about the variable 
--concurrent-insert so I blurred the lines just a little. I apologize.



Changing the mode cannot allow concurrent UPDATE or DELETE with an 
INSERT. The UPDATE or DELETE will always ask for a full table lock. It 
only allows for concurrent SELECT and INSERT commands to happen at the 
same time to the same MyISAM table.


http://dev.mysql.com/doc/refman/5.6/en/concurrent-inserts.html
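For reference, the behavior described in that page is controlled by the concurrent_insert system variable. A sketch of inspecting and changing it (5.6-era syntax):

```sql
-- Current setting; in 5.6 the values are:
--   0 / NEVER  - no concurrent inserts
--   1 / AUTO   - concurrent inserts allowed when the MyISAM data file
--                has no holes (deleted rows in the middle) [default]
--   2 / ALWAYS - concurrent inserts at the end of the file even when
--                holes exist
SHOW VARIABLES LIKE 'concurrent_insert';

-- Allow end-of-file inserts even for tables with deleted-row holes
SET GLOBAL concurrent_insert = 2;
```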

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Global read lock on delete

2015-12-09 Thread shawn l.green



On 12/9/2015 11:59 AM, Artem Kuchin wrote:

On 12/9/2015 7:35 PM, shawn l.green wrote:



On 12/9/2015 9:59 AM, Artem Kuchin wrote:

Hello!
THREAD_ID  LOCK_MODE                 LOCK_DURATION    LOCK_TYPE             TABLE_SCHEMA  TABLE_NAME
268871     MDL_INTENTION_EXCLUSIVE   MDL_EXPLICIT     Global read lock
270022     MDL_INTENTION_EXCLUSIVE   MDL_STATEMENT    Global read lock
268871     MDL_SHARED_NO_READ_WRITE  MDL_EXPLICIT     Table metadata lock   spru          searchsobjects
268871     MDL_SHARED_NO_READ_WRITE  MDL_EXPLICIT     Table metadata lock   spru          searches
268871     MDL_INTENTION_EXCLUSIVE   MDL_EXPLICIT     Schema metadata lock  spru
270022     MDL_INTENTION_EXCLUSIVE   MDL_TRANSACTION  Schema metadata lock  spru




You gave the answer in your last statement: "All tables are myisam" .

The MyISAM storage engine is not transactional and it does not do
row-level locking. All UPDATE and DELETE operations require a full
table lock to perform and those must wait for all earlier readers or
writers to exit the table before they can start.  INSERT operations
are special as you can enable a mode to allow INSERTs to happen only
at the end of the file and not be blocked while one of the other two
operations are in progress.


A TABLE LOCK is okay, I understand that. But I see a GLOBAL READ LOCK -
not a table lock, but GLOBAL. As I understand it, that means ALL TABLES
IN ALL DATABASES. Why?

Artem



That is something the official MySQL does not do. You would need to 
research the MariaDB fork's documentation to see why they report it as a 
global lock.


I'm thinking that it might not be a full lock on all tables, just on the 
one, to prevent someone from changing the table's design before the 
queued UPDATE or DELETE could complete. We do that, too. We lock the 
definition while any writer is using the table. That is performed using 
a metadata lock (MDL).


I, personally, have not had any time at all to dig that deeply into how 
the forks differ from the original in terms of lock management. Maybe 
another person on this list will know?


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: --initialize specified but the data directory has files in it. Aborting.

2015-11-13 Thread shawn l.green

Hello Jim,

On 11/13/2015 11:12 AM, jim Zhou wrote:

Hi,

I did "yum install mysql-community-server" and "service mysqld start"
I got the error

Initializing MySQL database:  2015-11-13T15:54:01.203931Z 0 [Warning]
Changed limits: max_open_files: 1024 (requested 5000)
2015-11-13T15:54:01.204205Z 0 [Warning] Changed limits: table_open_cache:
431 (requested 2000)
2015-11-13T15:54:01.204397Z 0 [Warning] TIMESTAMP with implicit DEFAULT
value is deprecated. Please use --explicit_defaults_for_timestamp server
option (see documentation for more details).
2015-11-13T15:54:01.207712Z 0 [ERROR] --initialize specified but the data
directory has files in it. Aborting.
2015-11-13T15:54:01.207751Z 0 [ERROR] Aborting

can someone help?

thank you,
Jim



You attempted to install a new 5.7 on top of an existing set of data.

Quoting from
http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_initialize

This option is used to initialize a MySQL installation by creating the data 
directory and populating the tables in the mysql system database.


That is why it told you that the folder it was attempting to use was not 
empty then stopped (aborted). It wasn't going to overwrite your existing 
system tables with the ones it needs to use.


You could have used YUM to install an RPM or used a native YUM repo (you 
did not say). Did you also read this in the manual?

http://dev.mysql.com/doc/refman/5.7/en/updating-yum-repo.html

By default, the MySQL Yum repository updates MySQL to the latest version in the 
release series you have chosen during installation (see Selecting a Release 
Series for details), which means, for example, a 5.6.x installation will NOT be 
updated to a 5.7.x release automatically.


There is a link in that page that points you to this other very 
important page:

http://dev.mysql.com/doc/refman/5.7/en/upgrading-from-previous-series.html

Does this help?

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Query Help...

2015-10-22 Thread shawn l.green



On 10/22/2015 11:48 AM, Don Wieland wrote:



On Oct 20, 2015, at 1:24 PM, shawn l.green <shawn.l.gr...@oracle.com> wrote:

Which release of MySQL are you using?


Version 5.5.45-cll


How many rows do you get if you remove the GROUP_CONCAT operator? We don't need 
to see the results. (sometimes it is a good idea to look at the raw, 
unprocessed results)

Is it possible that you are attempting to concat more values than allowed by 
--group-concat-max-len ?


When I did this I realized I was missing a GROUP BY clause.

Here is the debugged working version. Thanks guys.

SELECT
ht.*,
CONCAT(o.first_name, " ", o.last_name) AS orphan,
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS 
alloc
FROM hiv_transactions ht
LEFT JOIN tk_orphans o ON ht.orphan_id = o.orphan_id
LEFT JOIN hiv_trans_items hti ON ht.transaction_id = hti.hiv_transaction_id
WHERE ht.donor_id = 730 AND ht.tr_date BETWEEN "2015-01-01 00:00:00" AND "2015-12-31 
23:59:59"
GROUP BY ht.`transaction_id`
ORDER BY ht.tr_date DESC, ht.rec_code ASC;

Don Wieland
d...@pointmade.net
http://www.pointmade.net
https://www.facebook.com/pointmade.band




Thank you for sharing your solution.

Best wishes,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Query Help...

2015-10-20 Thread shawn l.green



On 10/20/2015 1:54 PM, Don Wieland wrote:

Hi all,

Trying to get a query working:

SELECT
ht.*,
CONCAT(o.first_name, " ", o.last_name) AS orphan,
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS 
alloc

FROM hiv_transactions ht

LEFT JOIN tk_orphans o ON ht.orphan_id = o.orphan_id
LEFT JOIN hiv_trans_items hti ON ht.transaction_id = hti.hiv_transaction_id

WHERE ht.donor_id = 730 AND ht.tr_date BETWEEN "2014-01-01 00:00:00" AND "2014-12-31 
23:59:59"
ORDER BY ht.tr_date DESC, ht.rec_code ASC;



I am only showing one row of the “hiv_transactions” table when there are 
multiple rows.

On the GROUP_CONCAT I am trying to get a comma-delimited list of the child 
rec_code values with no duplicates.

Appreciate any help. Hopefully a small mod ;-)


Don Wieland



Which release of MySQL are you using?

How many rows do you get if you remove the GROUP_CONCAT operator? We 
don't need to see the results. (sometimes it is a good idea to look at 
the raw, unprocessed results)


Is it possible that you are attempting to concat more values than 
allowed by --group-concat-max-len ?
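When that limit is exceeded, GROUP_CONCAT() silently truncates the result (with a warning) rather than raising an error, so it is worth checking. A sketch of inspecting and raising the limit for one session (the default in these releases is 1024 bytes; the new value here is arbitrary):

```sql
-- See the current per-session limit
SHOW VARIABLES LIKE 'group_concat_max_len';

-- Raise it for this session if results are being truncated
SET SESSION group_concat_max_len = 1024 * 1024;
```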


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Query optimizer-miss with unqualified expressions, bug or feature?

2015-10-19 Thread shawn l.green



On 10/19/2015 3:48 PM, Roy Lyseng wrote:

Hi Ben,

On 19.10.15 16.07, Ben Clewett wrote:

Hi Roy,

Thanks for the clear explanation.

I guess (hypothetically) the optimizer could see if it has a key, and
then use
two starts: one on 'a > 0' and one on 'a < 0', taking a union of the
result?
Which might make a significant result to something?


That is correct. However, if the substitution type for BOOLEAN was
UNSIGNED TINYINT instead of TINYINT, the range 'a < 0' would not even be
needed, and the quite cumbersome UNION would be avoided. But the best
solution would of course be a two-valued boolean type, where 'a <> 0'
would easily be transformed to a = 1. It would also mean that statistics
for the columns would be better, with TINYINT each value has the
estimated probability 1/256, whereas a boolean value would have
probability 1/2.




256 possible values of BOOLEAN? I've heard of fuzzy logic but this is 
awesome! Some new literal value names to consider: maybe, sort_of, 
nearly_always, certainly, practically_never, likely, ...

*


On a more serious note, indexes with limited cardinality are less useful 
than those with excellent cardinality. Cardinality is an approximation 
(or an exact calculation, depending on your storage engine) of how many 
unique values there are in the index.

If the Optimizer estimates (based on the cardinality) that more than 
about 30% of a table would need to be retrieved in random order based 
on an index, then that index is disallowed.  Why? Because the physical 
disk overhead of doing random access averages just slightly more than 
3x the overhead used to scan a much larger block of data.


http://dev.mysql.com/doc/refman/5.6/en/how-to-avoid-table-scan.html

An index on just a Boolean value would have at best a cardinality of 2. 
So, any indexes on Boolean values should include other columns to help 
the index become more selective.


http://dev.mysql.com/doc/refman/5.6/en/multiple-column-indexes.html
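For example, an index on a low-cardinality flag becomes far more selective when it is combined with a higher-cardinality column. A sketch (table and column names are illustrative):

```sql
-- An index on `active` alone has a cardinality of at most 2 and will
-- often be skipped by the optimizer. Pairing it with a selective
-- column lets the index narrow the scan much further.
CREATE INDEX idx_active_created ON orders (active, created_at);

-- This query can use both parts of the composite index:
SELECT *
FROM orders
WHERE active = 1
  AND created_at >= '2015-10-01';
```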


*Actually, fuzzy logic has lots of practical application in real world 
situations. They are just not using the MySQL BOOLEAN data type to store 
the value for comparison.


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: When to create a new database

2015-10-12 Thread shawn l.green



On 10/10/2015 10:28 AM, Richard Reina wrote:

If I were keeping track of high school sports statistics and thus designed
the following tables:

sports, rules, statistical definitions

and

players, teams, games

Would it be a good or bad idea to put the first set of tables in a separate
database called "library" since they are used for reference and rarely
change? What would be the pros and cons of using two different databases?

Thanks



The general rule is: are the tables all closely related (as in used for 
the same business purpose)?  If they are, and possibly interdependent, 
then they normally belong in the same database.


However, if some of them are derivatives of the others, then it may 
make logical sense for the derivative tables to reside in their own 
database.


example: one database may be your "raw" data: every play, every 
statistic.  The other database may be your "summary" data: the 
meta-statistics you get by combining or summarizing the raw data. 
Querying your already-summarized data will be much faster than trying to 
query your raw data for summaries every time you need them.


You may want to create the same set of tables in separate databases 
organized by sport. One DB for baseball, one for football, one for 
basketball, etc. That would make it easier for you to move just one 
shard of your entire data set to a new bigger server if the need arises. 
The problem with that design is that if you wanted to see a complete 
report for each player, then you have to query as many separate tables 
as you have sports (because each part of that player's history would be 
in a separate database).



If your MySQL instance is going to be acting as the back end to a web 
application, then you would probably want to split the tables into 
databases based on their function in your program: one database for your 
program's settings (users/accounts/access control, user options, user 
preferences,...) and a different database just for the statistical data.



A "database" is just a logically grouped set of tables. What is meant by 
"logic" in that previous sentence varies widely between each situation.
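A sketch of the raw-vs-summary split described above (all database, table, and column names here are hypothetical):

```sql
-- Raw play-by-play data lives in `rawstats`; derived summaries get
-- their own `summary` database.
CREATE DATABASE IF NOT EXISTS summary;

-- Build a per-player season summary once, instead of re-aggregating
-- the raw data on every report.
CREATE TABLE summary.player_season AS
SELECT player_id,
       SUM(points) AS total_points,
       COUNT(*)    AS games_played
FROM rawstats.plays
GROUP BY player_id;
```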


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Relational query question

2015-09-29 Thread shawn l.green



On 9/29/2015 1:27 PM, Ron Piggott wrote:



On 29/09/15 13:01, Richard Reina wrote:

If I have three simple tables:

mysql> select * from customer;
+----+--------+
| ID | NAME   |
+----+--------+
|  1 | Joey   |
|  2 | Mike   |
|  3 | Kellie |
+----+--------+
3 rows in set (0.00 sec)

mysql> select * from fruit;
+----+---------+
| ID | NAME    |
+----+---------+
|  1 | Apples  |
|  2 | Grapes  |
|  3 | Oranges |
|  4 | Kiwis   |
+----+---------+
4 rows in set (0.00 sec)

mysql> select * from purchases;
+----+---------+----------+
| ID | CUST_ID | FRUIT_ID |
+----+---------+----------+
|  2 |       3 |        2 |
|  3 |       1 |        4 |
|  4 |       1 |        2 |
|  5 |       2 |        1 |
+----+---------+----------+

I am having trouble understanding a relational query. How can I select
those fruits that Joey has not purchased?



I think you are going to want to use a "LEFT JOIN" using "purchases" as
the common table to join with a WHERE purchases.FRUIT_ID IS NULL





SELECT f.name
FROM fruit f
LEFT JOIN (purchases p
           INNER JOIN customer c
             ON p.cust_id = c.id
             AND c.name = 'Joey')
  ON f.id = p.fruit_id
WHERE c.id IS NULL;

You have to make that "and...Joey" condition part of the LEFT JOIN so 
it is selective for just "what Joey bought". It is the WHERE c.id IS 
NULL part that returns only the fruits that Joey did not buy.


If you put the c.name='Joey' term in the WHERE clause then you force a 
value to exist at that point of the query, turning your LEFT JOIN into 
an INNER JOIN (which would only show you what Joey did buy).


If you put WHERE c.name !='Joey' into the WHERE clause, then you would 
get the list of fruits that anyone else but Joey had purchased.


To see how this works and to understand the process a little better, 
expose all 3 layers of the problem as a big matrix (you'll get all 48 
row combinations).


SELECT f.id as f_id, f.name, p.id as p_id, p.cust_id, p.fruit_id, c.id 
as c_id,  c.name

FROM fruit f
LEFT JOIN purchases p
  on f.id = p.fruit_id
LEFT JOIN customer c
  on p.cust_id = c.id


From here, look at when the columns are NULL and when they aren't. Then 
experiment with different conditions. You are almost there. This should 
push you right to the top of the learning curve.
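An equivalent way to express the same anti-join, which some find easier to read (same three tables as above):

```sql
-- Fruits for which no "Joey purchased this" row exists
SELECT f.name
FROM fruit f
WHERE NOT EXISTS (
    SELECT 1
    FROM purchases p
    INNER JOIN customer c ON p.cust_id = c.id
    WHERE p.fruit_id = f.id
      AND c.name = 'Joey'
);
-- With the sample data above this returns Apples and Oranges.
```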


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: ENUM() vs TINYINT

2015-09-22 Thread shawn l.green



On 9/21/2015 9:03 AM, Richard Reina wrote:

I have a column named quarter which needs to have 5 possible inputs: 1, 2,
3, 4, or OT. Because of the OT possibility I am leaning towards ENUM.
Hence, I am also considering ENUM('first', 'second', 'third',
'fourth', 'overtime'), as the input will primarily be used in written
descriptions. Is this a wise or unwise way to design a table column?

Thanks



Depending on the specific sport (and level of competition), there may be 
more than one OT period. Do you really want to aggregate all of the OT 
stats into just one bucket?


It makes better sense to me to use a TINYINT for storage, then convert 
any value >= 5 to "OT", "OT2", ... for display, unless it makes no 
difference for your purposes which period of extra play you might be in.


This would also allow you to easily query your stats for any rows where 
`quarter`>4 to see which games, if any, experienced any OT play at all.


You could do the same with ENUMs, but then you would need a longer list 
of values ('OT', 'OT2', 'OT3', ...) to cover all the possibilities.
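In application code, the TINYINT-plus-conversion idea might look like this. The exact mapping (5 becomes "OT", 6 becomes "OT2", and so on) is an assumed convention for illustration, not something fixed by the thread:

```python
def period_label(period: int) -> str:
    """Render a stored TINYINT period for display.

    Assumed convention: 1-4 are regulation quarters, 5 is the first
    overtime ("OT"), 6 is "OT2", 7 is "OT3", and so on.
    """
    if period <= 4:
        return str(period)
    return "OT" if period == 5 else f"OT{period - 4}"

print([period_label(p) for p in (1, 4, 5, 6, 7)])
```

Keeping the raw number in the table preserves the easy `quarter > 4` query for "any overtime at all" while the label stays a pure display concern.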


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Need a little admin help

2015-09-01 Thread shawn l.green

Hello Steve,

On 8/27/2015 9:11 PM, Steve Matzura wrote:

I have a Wordpress user who is setting up a Website and says he can't
connect to his database. Both I and the Wordpress admin are new to
this, so I've probably done something wrong when I set him up
initially.

Once I connected to SQL as the SQL admin, I used the following
commands to set up the new user's access. In the commands below, I
used my name for the user and database names:

CREATE DATABASE steve_db;
GRANT ALL PRIVILEGES ON steve_db.* TO "steve"@"localhost"
 -> IDENTIFIED BY "steve_pw";
FLUSH PRIVILEGES;

All commands worked successfully.

To figure out what I did wrong, I need to know how to list all SQL
users and what databases they have access to, and if I discover I've
connected a user to a wrong database, how to correct this--do I delete
the user and database and start it all over, or is it easier to modify
wrong things than to replace them? Whatever I do, including deleting
everything, is OK, since the only things I'm doing with SQL at this
time have to do with Postfix, and I certainly know enough not to touch
those.

As always, thanks in advance.



Unless that user is going to terminal into that host server before 
trying to start their MySQL session, the account you created is not 
going to work.  The host pattern "@localhost" only authenticates users 
that appear to be connecting from within the host machine itself.  If 
this other user is attempting to connect to his database from some 
other location, you will need a different host pattern to allow that 
user to authenticate.


To see what privileges an account has, use the "SHOW GRANTS FOR ..." 
command. Any account can issue a SHOW GRANTS command (without any user 
name or FOR keyword) to see its own privileges.



Additional reading:
http://dev.mysql.com/doc/refman/5.6/en/account-names.html
http://dev.mysql.com/doc/refman/5.6/en/show-grants.html


Does that give you the details you need to create a second account with 
the appropriate host pattern?


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: pid-file quite

2015-08-18 Thread shawn l.green

Hello Martin,

Sorry about the delay. My normal support duties don't allow as much time 
as I would like to spend on community issues like this.


On 7/31/2015 10:41 AM, Martin Mueller wrote:

Dear Mr. Green,

first I'd like to thank you for your very clear explanations, which
helped. 'mysql' is an overdetermined word with all the advantages and
disadvantages of that.

While finally getting into the door, I ran into another problem: pid-file
quit without updating.  This seems to be a fairly common phenomenon, to
judge from offered help on the Web. But the explanations are all over the
map, and the help is of dubious value. I've run into this problem several
times. One piece of advice was to use ps ax|grep mysql and then kill the
processes with the number returned by the query. That worked on one
occasion, but on another occasion it didn't. On that occasion, though, if
I logged in as superuser and started the server it worked.

There doesn't seem to be anything about this problem in the mysql
documentation. I note that it seems to be a fairly common kind of error,
with no clear diagnosis or therapy from a source that can speak with
much authority.

It may be Mac specific and may have to do with Startup Items that you're
not supposed to use anymore and launcher daemons that are not easily
understood by poor mortals like me. But OS X is a very popular operating
system and MySQL is a very popular database. So I don't quite understand
why very basic installation and operating procedures are so complicated.
... snipped ...


The error is coming from mysqld_safe. What it is telling you is that the 
last time mysqld stopped operating, it did not clean up its previous pid 
file (process identifier file).  Why it did not do that can have many 
reasons, which is why there is no clear or simple answer that fits all 
situations.


You have to examine the MySQL Error log to find any errors that are 
causing the abnormal shutdown then correct those. You may need to start 
and stop the daemon manually (by executing mysqld directly in a shell 
session, not via the services or mysqld_safe scripts) at least once to 
ensure that you have the problem corrected. After that, and a normal 
shutdown, you should be able to resume starting the database daemon 
using the angel script mysqld_safe again.



Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: password problem

2015-07-31 Thread shawn l.green



On 7/31/2015 8:40 AM, Martin Mueller wrote:

Sorry  for the off-list reply. It was an oversight.

That said, the instructions for resetting a forgotten root password have a
section for Windows and a section for Unix. The Unix section begins as
follows:


1. Log on to your system as the Unix user that the MySQL server runs as
(for example, mysql).



Everything that executes on a Linux/Unix/Mac machine executes in the 
context of some kind of user account (the system login). By default, 
mysqld (the database server daemon) is installed to run under the host 
machine user account 'mysql'. That can be changed, but it is the 
default, which is why 'mysql' was listed in the "for example" part of 
that instruction.




But if I do this with the command 'mysql -u mysql' I get the answer



No. That is how you log into mysqld to open a MySQL client session. The 
instruction was to log in to your operating system as the user that 
mysqld operates as.  These are fundamentally different accounts at two 
very different levels.





Access denied for user 'mysql'@'localhost' (using password: NO)

I can do this as super user or normal, and I can try passwords from
earlier installations, but none of them work. So I am stopped dead in my
tracks, am I not?



That is because you didn't add this line to the [mysqld] section of your 
configuration file before you started mysqld.


skip-grant-tables

If you had, you would not have needed to use any passwords at all. This 
command (on the system prompt) would be all you need to connect to your 
now completely-unlocked database server (see the third section of 
generic instructions that work on any platform).


mysql



As for the datadir, the commands updatedb and locate mysql work on the Mac
and give me info about a whole set of files in
/usr/local/mysql-5.1.73-osx10.6-x86_64. That's where I thought it was, and
I deleted a previous installation because I had moved the data I needed to
another machine.

I'm not a very experienced programmer and have trouble wrestling with the
command line. But I think I did my due diligence and didn't find any open
doors.



The door is there, you just need to be able to see it as a door. 
Just a little more experience working on the command line will help.

... remainder snipped ...

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications  Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: table design question

2015-07-29 Thread shawn l.green

Hi Richard,

On 7/29/2015 10:19 AM, Richard Reina wrote:

If I were to create database table(s) to track the most common repairs to
different appliances, I can't decide if it would be better to create one
table with a long ENUM column that contains repairs that could be
attributed to any appliance, or different repair tables for each appliance.
All the tables would describe the same thing -- a repair -- however the
things being repaired are different in nature, which means a great many
types of repairs that do not relate. Here is an example.

repair_wash_mach
ID INT, Date DATE, Tech_ID INT,  Type ENUM( 'leak', 'motor_coupler',
'pump', 'controls', 'agitator')

repair_dish_washer
ID INT, Date DATE, Tech_ID INT,  Type ENUM( 'drain_arm', 'drive_belt',
'door_latch', 'spray_arm', 'drain_valve')

repair_refridgerator
ID INT, Date DATE, Tech_ID INT,  Type ENUM( 'circ_fan', 'compressor',
'disps_line', 'drain_pan', 'feeler_arm')

Or since they are all repairs, should they be in one table with a REALLY
long ENUM column -- one that will need to be altered as the number of
appliances will most likely increase?

ID INT, APPLIANCE VARCHAR(35), Date DATE, Tech_ID INT,  Type ENUM( 'leak',
'motor_coupler', 'pump', 'controls', 'agitator', 'drain_arm', 'drive_belt',
'door_latch', 'spray_arm', 'drain_valve', 'circ_fan', 'compressor',
'disps_line', 'drain_pan', 'feeler_arm')



I would suggest a table of appliances, a table of components, and a 
table of repairs something like this...


repair_tasks(
  task_id int auto_increment
, task_description varchar(25)
, appliance_id int not null
, component_id int not null
)

That way you can have two tasks for the same device. For example,

A task of "attach door seal" would associate the fields (refrigerator, 
door seal).  So would "replace door seal". So would "order door seal 
from warehouse".


I would not use ENUMs; you would run out of options too quickly. My 
examples are extremely simplified, but hopefully you can see the storage 
pattern I am suggesting.
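A runnable miniature of that three-table pattern, sketched with sqlite3. The appliance and component lookup tables, their names, and the sample rows are all invented for illustration; only the repair_tasks shape comes from the message above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE appliances   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE components   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE repair_tasks (
    task_id          INTEGER PRIMARY KEY AUTOINCREMENT,
    task_description TEXT,
    appliance_id     INT NOT NULL,
    component_id     INT NOT NULL
);

INSERT INTO appliances VALUES (1, 'refrigerator');
INSERT INTO components VALUES (1, 'door seal');

-- Several tasks can reference the same (appliance, component) pair.
INSERT INTO repair_tasks (task_description, appliance_id, component_id) VALUES
    ('attach door seal', 1, 1),
    ('replace door seal', 1, 1),
    ('order door seal from warehouse', 1, 1);
""")

rows = cur.execute("""
SELECT t.task_description
FROM repair_tasks t
JOIN appliances a ON a.id = t.appliance_id
JOIN components c ON c.id = t.component_id
WHERE a.name = 'refrigerator' AND c.name = 'door seal'
ORDER BY t.task_id
""").fetchall()
print([r[0] for r in rows])
```

Adding a new appliance or repair type is then an INSERT into a lookup table rather than an ALTER of an ENUM definition.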


Regards,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: MySQL 5.5 Slow performance to insert

2015-07-27 Thread shawn l.green



On 7/24/2015 4:35 PM, Camilo Vieira wrote:

Hi,

My MySQL server is performing very slow inserts. Does somebody could help
me to understand what's happening?
... snip ...
---TRANSACTION 31D6D74, ACTIVE 27107 sec
mysql tables in use 8, locked 8
7470 lock struct(s), heap size 801208, 849088 row lock(s)
MySQL thread id 15361, OS thread handle 0x7fea5e5c2700, query id 2690080
10.180.17.252 root Copying to tmp table
insert into CONFERENCIA_ENCALHE
(data, preco_capa_informado, qtde, qtde_informada,
chamada_encalhe_cota_id, controle_conferencia_encalhe_cota_id,
movimento_estoque_id, movimento_estoque_cota_id,
produto_edicao_id,juramentada)
(select distinct data_recolhimento, pe.preco_previsto, mec.qtde, mec.qtde,
cec.id, ccec.id, me.id, mec.id, mec.produto_edicao_id,0
from
movimento_estoque_cota_memoria mec,
movimento_estoque_memoria me,
chamada_encalhe ce,
chamada_encalhe_cota cec,
controle_conferencia_encalhe cce,
controle_conferencia_encalhe_cota ccec,
produto_edicao pe
where
mec.tipo_movimento_id =

...snip...


--
BUFFER POOL AND MEMORY
--
Total memory allocated 2197815296; in additional pool allocated 0
Dictionary memory allocated 28294038
Buffer pool size   131071
Free buffers   0
Database pages 123957
Old database pages 45737



Here are the two things I noticed:

1) You are using a parenthesized SELECT in this INSERT command instead 
of a plain SELECT. How long does that query take to execute in isolation 
(outside of the INSERT command)?


The preferred syntax is to skip the parentheses around the SELECT 
portion of the command. Instead of this,


insert into CONFERENCIA_ENCALHE(...) (select distinct data_recolhimento, 
pe.preco_previsto, mec.qtde, ...


, do this,

insert into CONFERENCIA_ENCALHE(...) SELECT distinct data_recolhimento, 
pe.preco_previsto, mec.qtde, ...



2) You have allocated very little memory to your InnoDB Buffer Pool. 
131071 pages = 2GB.  Depending on how much data you are attempting to 
first SELECT from seven tables then INSERT into the other, you may be 
forcing the system to do a lot of disk-level intermediate storage which 
is much slower than buffering that same information in memory. However, 
that may be all the RAM you can spare for MySQL. If that is so, then 
there is little you can do about this particular part of the problem.


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Post-installation config and startup questions

2015-06-23 Thread shawn l.green



On 6/23/2015 10:59 AM, Steve Matzura wrote:

I'm building a mail system with Dovecot, Postfix and Mailman with
MySQL. I have the other three products installed but not yet
configured. I'll do that after I get MySQL running.

The documentation at
http://dev.mysql.com/doc/mysql-repo-excerpt/5.6/en/linux-installation-yum-repo.html
was instrumental in remedying some errors I made along the way by
following an installation procedure for Ubuntu instead of for Fedora.
The installation completed successfully.



These should be the same instructions as we post here:
http://dev.mysql.com/doc/refman/5.6/en/linux-installation-yum-repo.html

But perhaps you needed to follow these instead?
http://dev.mysql.com/doc/refman/5.6/en/replace-third-party-yum.html



The next step in the instructions I originally read said the next
thing to do was run mysql_install_db. It started out well:
... snip ...

And then things fell apart.

2015-06-23 11:54:52 23737 [ERROR] Incorrect definition of table
mysql.proc: expected column 'sql_mode' at position 14 to have type
set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERRO
2015-06-23 11:54:52 23737 [ERROR] Incorrect definition of table
mysql.event: expected column 'sql_mode' at position 14 to have type
set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH'),
found type
set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','IGNORE_BAD_TABLE_OPTIONS','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_A
ERROR: 1136  Column count doesn't match value count at row 1
2015-06-23 11:54:53 23737 [ERROR] Aborting
... snip ...

Since this is a new installation, I decided to take the advice of the
automated instructions:

# mysqld_safe --defaults-file
150623 11:58:52 mysqld_safe Logging to '/var/log/mysqld.log'.
150623 11:58:52 mysqld_safe Starting mysqld daemon with databases from
/var/lib/mysql
150623 11:58:53 mysqld_safe mysqld from pid file
/var/run/mysqld/mysqld.pid ended

Unless I'm an absolute dolt, this looked successful. Since this was my
first time running MySQL, I tried the next step as if it were a new
install, which it's supposed to be:

# /usr/bin/mysql_secure_installation



Unless you can tail the end of the MySQL error log and see that it is 
"ready for connections", you may not have started MySQL and you would 
not be ready for mysql_secure_installation.






NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL
   SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MySQL to secure it, we'll need the current
password for the root user.  If you've just installed MySQL, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
ERROR 2002 (HY000): Can't connect to local MySQL server through socket
'/var/lib/mysql/mysql.sock' (2)
Enter current password for root (enter for none): ^C
Aborting!

Cleaning up...
Cleaning up...
Warning: Could not unlink .my.cnf.24101: No such file or directory
Warning: Could not unlink .mysql.24101: No such file or directory

And that's where things are.

Any help greatly appreciated.



@Andy - a.sm...@ukgrid.net -

you haven't said what OS you are installing this on or how you have installed 
MySQL.


Yes, he did but not directly and not completely. He is installing on 
some version of Fedora using a YUM repository.



@Steve

You might need to revert the situation to the state it was in just 
before you ran mysql_install_db, then try again.  That situation would be:

* The binaries are installed
* The --datadir location (which defaults to /var/lib/mysql on most 
Linux flavors) is empty.


But, before you attempt to return to that state:
1) Is your mysqld running?  If so, what errors were recorded to the log 
in your last startup (not messages, errors).


2) Is the running MySQL the package you were trying to install? Give us 
the 

Re: Refresh slave state

2015-06-18 Thread shawn l.green



On 6/18/2015 2:10 PM, Ben RUBSON wrote:

Hello,

In order for the slave to quickly show a communication issue between
the master and the slave, I set slave_net_timeout to 10.
show slave status then quickly updates, perfect.

I would also like the master to quickly show when the slave is no more
reachable.

However, show processlist and show slave hosts take a very long
time to update their status when the slave has gone.
Is there any way to have a refresh rate of about 10 seconds, as I did
on slave side ?

Thank you !

Ben



There are two situations to consider

1) The slave is busy re-trying.  It will do this a number of times, then 
eventually disconnect itself. If it does disconnect itself, the 
processlist report will reflect it as soon as that happens.


2) The connection between the master and slave died (or the slave itself 
was lost).  In this case, the server did not receive any "I am going to 
disconnect" message from its client (the slave). So as far as the server 
is concerned, it is simply sitting in a wait, expecting the client to 
eventually send in a new command packet.


That wait is controlled by --wait-timeout.  Once an idle client 
connection hits that limit, the server is programmed to think "the idiot 
on the other end of this call has hung up on me", so it simply closes its 
end of the socket. There are actually two different timers that could be 
used, --wait-timeout or --interactive-timeout, and which one is used to 
monitor the idle socket depends entirely on whether the client did or did 
not set the 'interactive' flag when it formed the connection. MySQL 
slaves do not use that flag.


Now, if the line between the two systems died in the middle of a 
conversation (an actual data transfer), then a shorter --net-write-timeout 
or --net-read-timeout would expire and the session would die then. But I 
think you were not observing one of those failures.


Does that help?

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.





Re: Why does a group_concat on a join change aggregate values?

2015-05-07 Thread shawn l.green

Hi Paul,
On 5/7/2015 10:17 AM, Paul Halliday wrote:

Fighting a bit with this one...

If I do something like (pseudo):

SELECT count(val) AS n, GROUP_CONCAT(types) AS c_types FROM tbl1

returns something like:

n  c_types
1  t9

when I add a left join though:

SELECT count(val) AS n, GROUP_CONCAT(types) AS c_types,
GROUP_CONCAT(two.types) AS d_types FROM tbl1 LEFT JOIN tbl2 AS two ON
tbl1.id = tbl2.id

returns something like:

val  c_types  d_types
3t9,t9,t9 a2,a3,a9

I can have as many group_concats against the same table with varying
results and they don't affect COUNT() but once I do that JOIN things start
to fall apart.

What is happening behind the scenes?

Thanks!

Here's a simple test: change the query so it no longer has the aggregate 
functions, then start counting rows by hand.


SELECT
  val AS n
, types AS c_types
, two.types AS d_types
FROM tbl1
LEFT JOIN tbl2 AS two
  ON tbl1.id = tbl2.id

The other thing that springs to mind is that you lack a GROUP BY in your 
query. It isn't required, but one can often help get you to the correct 
answer.


Best regards,

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN





Re: store search result as new table in memory

2015-04-07 Thread shawn l.green



On 4/7/2015 4:12 PM, Rajeev Prasad wrote:

hello Masters,
I am a novice, and I am wanting to know how to achieve this:
1 million plus rows in a table.
A user runs a search and gets some results. I want to store this result in memory in 
a way that lets the user fire more SQL searches on this result. How is this 
done? I want this to go at least up to 20 levels down.
In addition, let's say when I am 4 levels down, can I have the previous levels 
intact for making fresh searches on them?
I also want to store some queries, which produce the level-X result, in a manner 
that speeds the process in future (the user does not have to make multiple 
searches to get to the result).

initial table -> 1st search run on initial table (level 1) -> 2nd search run on previously obtained result rows (level 2)

any help is highly appreciated.
thank you.

Temporary tables are going to become your very good friends. They will 
be how you store your results for later reuse. You can pick any 
available storage engine for each instance. If your "levels" are going 
to have a lot of data in them, then you can exhaust your heap if you 
store them all using the MEMORY storage engine. For those, you will want 
to use InnoDB or MyISAM.


The advantage to using temporary tables is that they can have indexes on 
them. You can create the indexes when you create the table or you can 
ALTER the table later to add them.


CREATE TEMPORARY TABLE Level1 (KEY(a)) ENGINE=INNODB
  SELECT a, b, c, d ... FROM source_data;

CREATE TEMPORARY TABLE Level2 ENGINE=MEMORY SELECT ... FROM Level1;
ALTER TABLE Level2 ADD KEY(d, c);

If you don't want the column names and data types determined for you by 
the results of the SELECT, you can define the columns explicitly, then 
populate the table using INSERT ... SELECT ... instead.


CREATE TEMPORARY TABLE name_goes_here (
  a int
, b varchar(50)
, c datetime
...
) ENGINE=...   (pick which engine you want to use, or let it choose the 
default for that database by leaving the ENGINE= clause out of the 
definition)
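A runnable miniature of the layered-temporary-table idea, using sqlite3. SQLite has no ENGINE clause and the table and column names here are invented; the point is the CREATE-from-SELECT step plus adding an index to a result level afterwards:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE source_data (a INT, b TEXT);
INSERT INTO source_data VALUES (1,'x'),(2,'y'),(3,'z'),(4,'w');

-- Level 1: materialize a first search as a temporary table.
CREATE TEMP TABLE level1 AS SELECT a, b FROM source_data WHERE a > 1;
CREATE INDEX idx_level1_a ON level1(a);

-- Level 2: search the level-1 result, not the original table,
-- leaving level1 intact for further drill-downs.
CREATE TEMP TABLE level2 AS SELECT a, b FROM level1 WHERE a > 3;
""")
print(cur.execute("SELECT a FROM level2").fetchall())  # [(4,)]
```

Each level stays queryable until the session ends, which matches the "keep previous levels intact" requirement.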


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

You or someone you know could be a presenter at Oracle Open World! The 
call for proposals is open until April 29.

https://www.oracle.com/openworld/call-for-proposals.html




Re: longtext fields in a row

2015-04-01 Thread shawn l.green

Hi Andrew,

On 4/1/2015 1:58 PM, Andrew Wallace wrote:

I thought that TEXT fields only stored a pointer to the actual data in
the table,
not the data itself - storing 9 to 12 bytes in the table:

"BLOB and TEXT columns count from one to four plus eight bytes each
toward the row-size limit because their contents are stored separately
from the rest of the row."
   -- https://dev.mysql.com/doc/refman/5.0/en/column-count-limit.html

also: https://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html

That said, I would think a better structure would be to have the data
stored in a different
table, keyed to a table containing the date and integer fields...



Well, that also depends on which ROW_FORMAT he has told his InnoDB 
engine to use for that table.  If he uses DYNAMIC, then the BLOB/TEXT 
pointer is only 20 bytes and none of the value is stored in the base 
row.  Under the older formats (REDUNDANT/COMPACT), the first 768 bytes 
are actually part of the row, to save the extra jump to off-page storage 
for the remainder of the BLOB/TEXT value.

http://dev.mysql.com/doc/refman/5.6/en/innodb-row-format-dynamic.html
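A back-of-the-envelope check of how those formats interact with row limits. The figures are assumptions for illustration: InnoDB caps a row at roughly half a 16KB page (about 8126 bytes), COMPACT/REDUNDANT keep a 768-byte local prefix of each BLOB/TEXT in-row, DYNAMIC keeps only a 20-byte pointer, and all other columns and overhead are ignored:

```python
ROW_LIMIT = 8126        # approx. InnoDB max row length on 16KB pages (assumed)
COMPACT_PREFIX = 768    # in-row bytes per BLOB/TEXT under COMPACT/REDUNDANT
DYNAMIC_POINTER = 20    # in-row bytes per BLOB/TEXT under DYNAMIC

for n_cols in (10, 11, 16):
    compact = n_cols * COMPACT_PREFIX
    dynamic = n_cols * DYNAMIC_POINTER
    print(f"{n_cols:2d} TEXT columns: COMPACT {compact:5d}B "
          f"({'fits' if compact <= ROW_LIMIT else 'over'}), "
          f"DYNAMIC {dynamic:4d}B "
          f"({'fits' if dynamic <= ROW_LIMIT else 'over'})")
```

Under these assumptions, ten 768-byte prefixes fit but an eleventh does not, which would line up with a row that accepts values through the 10th longtext column and stalls afterwards; DYNAMIC's 20-byte pointers never come close to the limit.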

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

You or someone you know could be a presenter at Oracle Open World! The 
call for proposals is open until April 29.

https://www.oracle.com/openworld/call-for-proposals.html





On 4/1/15 10:35 AM, Andrew Mueller wrote:

There is a max row size of 65,535 bytes.

There is no real way to get around this limit other than placing the HTML
code somewhere else, perhaps in a different table.

On Wednesday, April 1, 2015, Trianon 33 triano...@gmail.com wrote:


Hello,

I'm fiddling with a table where I put in a date field (datetime, also a
key) and some integer fields (8 of them, mostly 14 long) and some longtext
fields (16 of them).

The longtext fields are filled with some statistics I generate, complete
with HTML around them, something like <td>12.925.965</td> but then
bigger, though mostly smaller than 1 Mb.

This row is initially created by filling the first 10 fields (datetime,
the integers and the 1st longtext) and then updated each and every time
the next longtext value becomes available.

However this is ok up to the 10th longtext field and then it stops. The
next longtext operation runs ok, no error messages etc., but the longtext
field itself remains empty.

Up to now I have no clue about my wrongdoings, so do you have any
suggestions?

Is there a max of longtext fields in 1 row?
Is there a max of longtext size in 1 row?

Other ideas?

Thanks in advance for any advice, best regards, Schimanski.












Re: duplicate rows in spite of multi-column unique constraint

2015-03-24 Thread shawn l.green

Hi Chris,

On 3/24/2015 10:07 AM, Chris Hornung wrote:

Thanks for the suggestions regarding non-printing characters, definitely
makes sense as a likely culprit!

However, the data really does seem to be identical in this case:

mysql select id, customer_id, concat('-', group_id, '-') from
app_customergroupmembership  where customer_id ='ajEiQA';
+-------------+-------------+----------------------------+
| id          | customer_id | concat('-', group_id, '-') |
+-------------+-------------+----------------------------+
| 20279608258 | ajEiQA      | -ddH6Ev-                   |
| 20279608269 | ajEiQA      | -ddH6Ev-                   |
+-------------+-------------+----------------------------+
2 rows in set (0.00 sec)


I also ran the data through hexdump as a secondary check, also looks
identical:

  mysql --defaults-extra-file=~/.customers_mysql.cnf app -s -e "select
id, customer_id, group_id from app_customergroupmembership where
customer_id ='ajEiQA';" | hexdump -c
000   2   0   2   7   9   6   0   8   2   5   8  \t   a   j   E   i
010   Q   A  \t   d   d   H   6   E   v  \n   2   0   2   7   9   6
020   0   8   2   6   9  \t   a   j   E   i   Q   A  \t   d   d   H
030   6   E   v  \n


Any other suggestions given this info?




This reminded me of something so I went digging.  It turns out to be a 
bug introduced by a fix applied to a different bug in 5.6.12. We hate 
creating regressions, but they do sometimes happen.


http://bugs.mysql.com/bug.php?id=73170

The fix was published in 5.5.40, 5.6.21, and 5.7.5.  You will need to 
upgrade to one of those releases (or any later release) to avoid this 
happening in the future.


For now, manually resolve the duplication by deciding which id value you 
want to keep and discarding the other copy of the row.
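One way to script that cleanup, sketched with sqlite3 using the two rows from the thread (this keeps the lowest id per (customer_id, group_id) pair; in production you would pick the survivor deliberately and run this inside a transaction):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE app_customergroupmembership (
    id INTEGER PRIMARY KEY, customer_id TEXT, group_id TEXT);
INSERT INTO app_customergroupmembership VALUES
    (20279608258, 'ajEiQA', 'ddH6Ev'),
    (20279608269, 'ajEiQA', 'ddH6Ev');
""")

# Delete every row whose id is not the minimum for its
# (customer_id, group_id) group.
cur.execute("""
DELETE FROM app_customergroupmembership
WHERE id NOT IN (
    SELECT MIN(id) FROM app_customergroupmembership
    GROUP BY customer_id, group_id)
""")
rows = cur.execute(
    "SELECT id FROM app_customergroupmembership").fetchall()
print(rows)  # [(20279608258,)]
```

Note that MySQL itself rejects a DELETE whose subquery reads the table being deleted from (error 1093); there you would wrap the subquery in a derived table, or materialize the keeper ids into a temporary table first.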


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: Help improving query performance

2015-02-04 Thread shawn l.green

Hi Larry,

On 2/1/2015 4:49 PM, Larry Martell wrote:

I have 2 queries. One takes 4 hours to run and returns 21 rows, and
the other, which has 1 additional where clause, takes 3 minutes and
returns 20 rows. The main table being selected from is largish
(37,247,884 rows with 282 columns). Caching is off for my testing, so
it's not related to that. To short circuit anyone asking, these
queries are generated by Python code, which is why there's an IN
clause with 1 value, as opposed to an =.

Here are the queries and their explains. The significant difference is
that the faster query has Using
intersect(data_cst_bbccbce0,data_cst_fba12377) in the query plan -
those 2 indexes are on the 2 columns in the where clause, so that's
why the second one is faster. But I am wondering what I can do to make
the first one faster.


4 hour query:

SELECT MIN(data_tool.name) as tool,
MIN(data_cst.date_time) start,
MAX(data_cst.date_time) end,
MIN(data_target.name) as target,
MIN(data_lot.name) as lot,
MIN(data_wafer.name) as wafer,
MIN(measname) as measname,
MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id = data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
   data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26 23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id



... snipped ...




Faster query:

SELECT MIN(data_tool.name) as tool,
MIN(data_cst.date_time) start,
MAX(data_cst.date_time) end,
MIN(data_target.name) as target,
MIN(data_lot.name) as lot,
MIN(data_wafer.name) as wafer,
MIN(measname) as measname,
MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id = data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
   data_recipe.id IN (148) AND
   data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26 23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id


... snip ...


Thanks for taking the time to read this, and for any help or pointers
you can give me.



The biggest difference is the added selectivity generated by the WHERE 
term against the data_recipe table.


Compare the two EXPLAINs: in the faster query, data_recipe is listed 
second. That gives the additional term a chance to reduce the number of 
row combinations for the entire query.


To really get at the logic behind how the Optimizer chooses its 
execution plan, get an optimizer trace. Look at the cost estimates for 
each phase being considered.

http://dev.mysql.com/doc/refman/5.6/en/optimizer-trace-table.html
http://dev.mysql.com/doc/internals/en/optimizer-tracing.html
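A minimal way to capture such a trace for one statement, per those pages (session-scoped; run it in the same connection as the slow query):

```sql
-- Hedged sketch: capture the optimizer trace for a single statement.
SET optimizer_trace = 'enabled=on';
SET optimizer_trace_max_mem_size = 1048576;  -- enlarge if the trace is truncated

-- ... run the slow SELECT (or its EXPLAIN) here ...

SELECT TRACE
  FROM information_schema.OPTIMIZER_TRACE;   -- JSON with per-plan cost estimates

SET optimizer_trace = 'enabled=off';
```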

Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: Help improving query performance

2015-02-04 Thread shawn l.green

Hi Larry,

On 2/4/2015 3:18 PM, Larry Martell wrote:

On Wed, Feb 4, 2015 at 2:56 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

Hi Larry,


On 2/1/2015 4:49 PM, Larry Martell wrote:


I have 2 queries. One takes 4 hours to run and returns 21 rows, and
the other, which has 1 additional where clause, takes 3 minutes and
returns 20 rows. The main table being selected from is largish
(37,247,884 rows with 282 columns). Caching is off for my testing, so
it's not related to that. To short circuit anyone asking, these
queries are generated by python code, which is why there's an IN
clause with 1 value, as oppose to an =.

Here are the queries and their explains. The significant difference is
that the faster query has Using
intersect(data_cst_bbccbce0,data_cst_fba12377) in the query plan -
those 2 indexes are on the 2 columns in the where clause, so that's
why the second one is faster. But I am wondering what I can do to make
the first one faster.


4 hour query:

SELECT MIN(data_tool.name) as tool,
 MIN(data_cst.date_time) start,
 MAX(data_cst.date_time) end,
 MIN(data_target.name) as target,
 MIN(data_lot.name) as lot,
 MIN(data_wafer.name) as wafer,
 MIN(measname) as measname,
 MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id =
data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id



... snipped ...




Faster query:

SELECT MIN(data_tool.name) as tool,
 MIN(data_cst.date_time) start,
 MAX(data_cst.date_time) end,
 MIN(data_target.name) as target,
 MIN(data_lot.name) as lot,
 MIN(data_wafer.name) as wafer,
 MIN(measname) as measname,
 MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id =
data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
data_recipe.id IN (148) AND
data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id


... snip ...



Thanks for taking the time to read this, and for any help or pointers
you can give me.



The biggest difference is the added selectivity generated by the WHERE term
against the data_recipe table.

Compare the two EXPLAINS, in the faster query you see that data_recipe is
listed second. This allows the additional term a chance to reduce the number
of row combinations for the entire query.

To really get at the logic behind how the Optimizer chooses its execution
plan, get an optimizer trace. Look at the cost estimates for each phase
being considered.
http://dev.mysql.com/doc/refman/5.6/en/optimizer-trace-table.html
http://dev.mysql.com/doc/internals/en/optimizer-tracing.html


Thanks very much Shawn for the reply and the links. I will check those
out and I'm sure I will find them very useful.

Meanwhile I changed the query to select from data_cst using the where
clause into a temp table and then I join the temp table with the other
tables. That has improved the slow query from 4 hours to 10 seconds
(!)



Did you also add an index to the temporary table for the JOIN condition? 
It might make it even faster.
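A sketch of that idea (column names are borrowed from the posted query; the temp-table name and the choice of indexed columns are assumptions):

```sql
-- Hedged sketch: materialize the date-filtered rows, then index the join columns.
CREATE TEMPORARY TABLE tmp_cst AS
SELECT tool_id, target_name_id, lot_id, wafer_id,
       meas_params_name_id, recipe_id, date_time
  FROM data_cst
 WHERE date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26 23:59:59';

-- Index the columns the later joins and GROUP BY will use:
ALTER TABLE tmp_cst
  ADD INDEX (tool_id), ADD INDEX (recipe_id),
  ADD INDEX (wafer_id), ADD INDEX (lot_id);
```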


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: Help improving query performance

2015-02-04 Thread shawn l.green

Hello Larry,

On 2/4/2015 3:37 PM, Larry Martell wrote:

On Wed, Feb 4, 2015 at 3:25 PM, shawn l.green shawn.l.gr...@oracle.com wrote:

Hi Larry,


On 2/4/2015 3:18 PM, Larry Martell wrote:


On Wed, Feb 4, 2015 at 2:56 PM, shawn l.green shawn.l.gr...@oracle.com
wrote:


Hi Larry,


On 2/1/2015 4:49 PM, Larry Martell wrote:



I have 2 queries. One takes 4 hours to run and returns 21 rows, and
the other, which has 1 additional where clause, takes 3 minutes and
returns 20 rows. The main table being selected from is largish
(37,247,884 rows with 282 columns). Caching is off for my testing, so
it's not related to that. To short circuit anyone asking, these
queries are generated by python code, which is why there's an IN
clause with 1 value, as oppose to an =.

Here are the queries and their explains. The significant difference is
that the faster query has Using
intersect(data_cst_bbccbce0,data_cst_fba12377) in the query plan -
those 2 indexes are on the 2 columns in the where clause, so that's
why the second one is faster. But I am wondering what I can do to make
the first one faster.


4 hour query:

SELECT MIN(data_tool.name) as tool,
  MIN(data_cst.date_time) start,
  MAX(data_cst.date_time) end,
  MIN(data_target.name) as target,
  MIN(data_lot.name) as lot,
  MIN(data_wafer.name) as wafer,
  MIN(measname) as measname,
  MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id =
data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
 data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id



... snipped ...




Faster query:

SELECT MIN(data_tool.name) as tool,
  MIN(data_cst.date_time) start,
  MAX(data_cst.date_time) end,
  MIN(data_target.name) as target,
  MIN(data_lot.name) as lot,
  MIN(data_wafer.name) as wafer,
  MIN(measname) as measname,
  MIN(data_recipe.name) as recipe
FROM data_cst
INNER JOIN data_tool ON data_tool.id = data_cst.tool_id
INNER JOIN data_target ON data_target.id = data_cst.target_name_id
INNER JOIN data_lot ON data_lot.id = data_cst.lot_id
INNER JOIN data_wafer ON data_wafer.id = data_cst.wafer_id
INNER JOIN data_measparams ON data_measparams.id =
data_cst.meas_params_name_id
INNER JOIN data_recipe ON data_recipe.id = data_cst.recipe_id
WHERE data_target.id IN (172) AND
 data_recipe.id IN (148) AND
 data_cst.date_time BETWEEN '2015-01-26 00:00:00' AND '2015-01-26
23:59:59'
GROUP BY wafer_id, data_cst.lot_id, target_name_id


... snip ...




Thanks for taking the time to read this, and for any help or pointers
you can give me.



The biggest difference is the added selectivity generated by the WHERE
term
against the data_recipe table.

Compare the two EXPLAINS, in the faster query you see that data_recipe is
listed second. This allows the additional term a chance to reduce the
number
of row combinations for the entire query.

To really get at the logic behind how the Optimizer chooses its execution
plan, get an optimizer trace. Look at the cost estimates for each phase
being considered.
http://dev.mysql.com/doc/refman/5.6/en/optimizer-trace-table.html
http://dev.mysql.com/doc/internals/en/optimizer-tracing.html



Thanks very much Shawn for the reply and the links. I will check those
out and I'm sure I will find them very useful.

Meanwhile I changed the query to select from data_cst using the where
clause into a temp table and then I join the temp table with the other
tables. That has improved the slow query from 4 hours to 10 seconds
(!)



Did you also add an index to the temporary table for the JOIN condition? It
might make it even faster


No, I didn't. I (and the users) were so shocked and happy with the
massive improvement I moved on to make similar changes to other
queries.

This is a django app, and it's a one-shot deal - i.e. there's just the
one query run and the response is sent back to the browser and that's
the end of the session and the temp table. So I'm thinking it's
probably not worth it.

As an aside this change has messed up all my unit tests - they send
multiple requests, but they're all in the same session. So only the
first succeeds and the next one fails because the temp table already
exists. I haven't figured out how to get it to run each request in its
own session. I guess I'm going to have to drop the temp table after I
join with it, before I send the response back.



If...
* it's a MEMORY temp table
* it's always the same table design

Then you can use DELETE to clear the content (it's faster than DROP).
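A sketch of that pattern (the table name and columns are hypothetical):

```sql
-- Hedged sketch: keep one MEMORY temp table per session and clear it
-- with DELETE between requests instead of DROP + CREATE.
CREATE TEMPORARY TABLE IF NOT EXISTS tmp_results (
  id  INT NOT NULL,
  val VARCHAR(64),
  KEY (id)
) ENGINE=MEMORY;

DELETE FROM tmp_results;   -- empties the table but keeps its definition
-- ... repopulate tmp_results and join against it for this request ...
```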

Re: Upgrading How To

2014-12-26 Thread shawn l.green

Hi Grant,

On 12/26/2014 11:18 AM, Grant Peel wrote:

Reindl,

I am sorry, in my original post, I forgot to mention that the OLD box and
the NEW box are the same physical machine. I need to be able to save all
data into files on a memstick or portable disc and restore them to the newly
staged machine (with the new version of mysql).

-Grant



There are a few file-level storage changes between 5.x (where x < 6) and 
5.6 that you may need to resolve before the upgrade.  Examples:


* 5.6 will not read any tables that were physically created in a version 
older than 5.0 and never rebuilt using a newer version.


* the YEAR(2) data type is no longer supported.

* pre-4.1 passwords - If you are upgrading from version 5.1 or older, 
you will need to update those password hashes or configure 5.6 to 
recognize the older hashes as valid. The user authentication system in 
5.6 is more advanced than in earlier versions.


Several features are removed as of 5.6
http://dev.mysql.com/doc/refman/5.6/en/mysql-nutshell.html#mysql-nutshell-removals

Many defaults were changed starting with 5.6. These and other things to 
consider before a move to 5.6 (like the SQL Mode and timestamp 
behaviors) are all listed here:

http://dev.mysql.com/doc/refman/5.6/en/upgrading-from-previous-series.html

mysql_upgrade will update the system tables in the `mysql` database and 
run a CHECK TABLE ... FOR UPGRADE on all your tables, but it cannot 
handle every possible upgrade issue you may encounter, given the other 
things about the server that may have changed.
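To spot-check a handful of important tables yourself before the move (the database and table names are placeholders):

```sql
-- Hedged sketch: the same check mysql_upgrade runs, on one table at a time.
CHECK TABLE mydb.mytable FOR UPGRADE;
SHOW WARNINGS;

-- If a table is flagged, rebuilding it on the current version usually clears it:
ALTER TABLE mydb.mytable ENGINE=InnoDB;   -- a null rebuild of an InnoDB table
```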


Reindl's technique with the rsync is just like what you are doing with 
your full-image save/restore. His is just optimized for operating 
between two live machines.


You are also very strongly encouraged to test the upgrade to 5.6 on a 
lab box long before you push it into production. This will give you the 
chance to find any of those new 5.6 changes that your clients may not be 
ready to handle.


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: Function

2014-12-17 Thread shawn l.green



On 12/12/2014 2:38 PM, Alexander Syvak wrote:

Hello!

How is a function actually created internally in MySQL after a CREATE FUNCTION
statement?

Why can't there be a dynamic SQL inside a function?



Sorry for the delay.


The answer is embedded in this description of what is or is not allowed 
within a function:


from: http://dev.mysql.com/doc/refman/5.6/en/create-procedure.html

Stored functions may not contain statements that perform explicit or implicit 
commit or rollback. Support for these statements is not required by the SQL 
standard, which states that each DBMS vendor may decide whether to permit them.


Since we cannot easily restrict the types of commands generated by 
dynamic SQL within a function, we simply disallowed those as part of the 
design.


This and several other restrictions for functions are also listed here
http://dev.mysql.com/doc/refman/5.6/en/stored-program-restrictions.html

The gist of all of these restrictions is that a FUNCTION should create 
as few side effects as possible while generating its result value.
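To see the restriction in action (the procedure and variable names are made up):

```sql
-- Hedged sketch: dynamic SQL works in a PROCEDURE but is rejected in a
-- FUNCTION with error 1336 (ER_STMT_NOT_ALLOWED_IN_SF_OR_TRG).
DELIMITER //
CREATE PROCEDURE count_rows(IN tbl VARCHAR(64))
BEGIN
  SET @s = CONCAT('SELECT COUNT(*) FROM ', tbl);
  PREPARE stmt FROM @s;       -- dynamic SQL: allowed here
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END//
DELIMITER ;

-- Moving the same PREPARE/EXECUTE body into a CREATE FUNCTION fails at
-- creation time, for the design reason described above.
```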


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: DB redolog

2014-12-17 Thread shawn l.green

Hi Frank,

On 12/17/2014 2:11 AM, xiangdongzou wrote:

HI all:

As we know, when we shut down the database cleanly, the database can
do a checkpoint, so we don't need the redo log again. In MySQL (InnoDB), we can
restart normally. But an Oracle database still needs its current redo log group. Why?




While someone on this list probably knows the answer to your question, 
this list is in support of the MySQL database system and its related 
products. As you correctly identified, the InnoDB storage engine starts 
back up without any problems.


In fact, the REDO log is there for recovery purposes only. During a 
normal startup following a normal or slow shutdown, it is not required 
at all. During a recovery restart, any transactions that were logged but 
not yet checkpointed into the physical data file(s) are handled then. 
This gives us the best chance of reaching a fully consistent state 
after some kind of dirty shutdown event (crash, power failure, disk 
failure, ...).


This link describes the method that must be followed in order to erase 
the logs to allow the server to generate new ones on the next restart. 
This is usually done to change the size, number, or location of the REDO 
log files.

http://dev.mysql.com/doc/refman/5.6/en/innodb-data-log-reconfiguration.html
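Before following that procedure, it helps to record what the server is currently using:

```sql
-- Hedged sketch: inspect the current redo log layout before reconfiguring it.
SHOW VARIABLES LIKE 'innodb_log_file_size';
SHOW VARIABLES LIKE 'innodb_log_files_in_group';
SHOW VARIABLES LIKE 'innodb_log_group_home_dir';
```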

Please let us know if you have any other questions about MySQL.

Yours,

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: update and control flow

2014-12-09 Thread shawn l.green

Hello Martin,

On 12/9/2014 9:25 AM, Martin Mueller wrote:

I'm trying to get my feet wet with 'if' and 'when' uses in mysql. it would
be very useful for update operations, but I can't get it right.

If I read the documentation correctly, it should be possible to say
something like

UPDATE X

if WORD like 'a%' SET COMMENT = 'a'
elseif WORD like 'b%' SET COMMENT = 'b'
END IF


But this gives me an error message. What am I doing wrong?



The correct syntax is to put the function after the = sign. The column 
name must appear by itself on the left side of the equation. You must 
also use the function-format of IF or a CASE..END construction.


UPDATE X
SET COMMENT = IF(WORD like 'a%','a',if(WORD like 'b%', 'b',COMMENT))

UPDATE X
SET COMMENT = CASE
  WHEN WORD like 'a%' then 'a'
  WHEN WORD like 'b%' then 'b'
  ELSE COMMENT
  END

But, as you noted, it may be more efficient to simply run two UPDATE 
statements each with the appropriate WHERE clause to limit the changes 
to just those rows that match your conditions.


UPDATE X
SET COMMENT = 'a'
WHERE WORD like 'a%'

And you can combine both techniques to limit the scope of the UPDATE to 
just the rows to change by matching either pattern.


UPDATE X
SET COMMENT = IF(WORD like 'a%','a','b')
WHERE WORD like 'a%' or WORD like 'b%'

Note: this last format doesn't need the second IF() in the 'else' 
portion of the first IF() function because the set of rows to be 
operated on is already limited by the WHERE clause. Each row will match 
one condition or the other, never neither.


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: forum vs email

2014-12-09 Thread shawn l.green



On 12/9/2014 9:10 PM, h...@tbbs.net wrote:

2014/12/09 15:20 -0600, Peter Brawley 

Nope.

And why not? Because no one bothered to implement it? Now I (for the first time?) looked at 
forums.mysql.com and see more topics than on lists.mysql.com. The former is 
just more with-it, I guess.

I believe that one could comment on a Google group both by e-mail and through a 
web browser.

And one who reads and sends e-mail through a web browser surely considers 
discussion through e-mail simply more overhead than using his web browser for 
discussion, too. I further suspect e-mail clients on one's own computer are not in 
fashion.



Well, the Forum does provide a bit less permanence than subscribing to a 
list. You can login, post your questions, then disappear without 
worrying about future emails about topics you may never be interested in.


I guess we (the list members) are more dedicated than the forum users 
because we all recognize the usefulness of seeing a broad range of 
topics presented in an easily filterable and save-able format (email) 
over the web-based content of the forums.


I also find it easier to monitor the emails than the forum simply 
because the most recent response to a list topic does not automatically 
reposition the topic to the top of the list.  It's harder to lose a 
question in the noise when I can see what I have tagged as read/unread. 
I can't do that in the forums.


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: table-for-column

2014-12-04 Thread shawn l.green



On 12/1/2014 6:09 AM, Johan De Meersman wrote:

- Original Message -

From: peter brawley peter.braw...@earthlink.net
Subject: Re: table-for-column



I wonder if anyone knows why sites like Stack Overflow and those of ours
I mentioned are seeing more volume, while this list and all MySQL fora
are seeing much, much less.


The major benefit I see on StackOverflow and the like, is the rating system on 
the comments, and to some extent the rating system for users.

On the other hand, I find that the signal-to-noise ratio on older media like 
mailing lists and IRC tends to be much more favourable, presumably because it 
is where the dinosaurs dwell :-)




A lot of new users may only use the MySQL Forums and not even know about 
this mailing list. I guess this email-based peer-to-peer exchange is 
slowly disappearing into the background like the old usenet newsgroups, eh?


http://forums.mysql.com/

My problem is a lack of time. I can monitor the mailing lists or the 
forums but rarely both while still doing my regular job of handling the 
official service requests.


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: Help optimize query.

2014-12-01 Thread shawn l.green

Hello Mimko,

Sorry for the late reply. I had a bunch of work to take care of before 
vacation, then there was the vacation itself. :)


On 11/13/2014 2:34 PM, Mimiko wrote:

Hello. I have this table:

  show create table cc_agents_tier_status_log:
CREATE TABLE cc_agents_tier_status_log (
   id int(10) unsigned NOT NULL AUTO_INCREMENT,
   date_log timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
   cc_agent varchar(45) NOT NULL,
   cc_agent_tier_status_id tinyint(3) unsigned NOT NULL,
   cc_queue_id tinyint(3) unsigned NOT NULL,
   cc_agent_id int(10) unsigned NOT NULL,
   cc_agent_phone smallint(5) unsigned NOT NULL,
   cc_agent_domain varchar(45) NOT NULL DEFAULT 'pbx01.apa-canal.md',
   PRIMARY KEY (id),
   KEY IDX_cc_agents_tier_status_log_2 (cc_agent) USING HASH,
   KEY IDX_cc_agents_tier_status_log_3 (date_log),
   KEY FK_cc_agents_tier_status_log_2 (cc_agent_id),
   KEY FK_cc_agents_tier_status_log_3 (cc_queue_id),
   KEY FK_cc_agents_tier_status_log_1 (cc_agent_tier_status_id)
USING BTREE,
   KEY IDX_cc_agents_tier_status_log_7 (id,date_log),
   CONSTRAINT FK_cc_agents_tier_status_log_1 FOREIGN KEY
(cc_agent_tier_status_id) REFERENCES cc_agent_tier_status_chart
(id) ON UPDATE CASCADE,
   CONSTRAINT FK_cc_agents_tier_status_log_2 FOREIGN KEY
(cc_agent_id) REFERENCES apacanal.employee (id) ON UPDATE CASCADE,
   CONSTRAINT FK_cc_agents_tier_status_log_3 FOREIGN KEY
(cc_queue_id) REFERENCES cc_queues (id) ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=23799 DEFAULT CHARSET=ascii

  show index from cc_agents_tier_status_log:
Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment
cc_agents_tier_status_log | 0 | PRIMARY | 1 | id | A | 23999 | (null) | BTREE | (null) | (null)
cc_agents_tier_status_log | 1 | IDX_cc_agents_tier_status_log_2 | 1 | cc_agent | A | 260 | (null) | BTREE | (null) | (null)
cc_agents_tier_status_log | 1 | IDX_cc_agents_tier_status_log_3 | 1 | date_log | A | 23999 | (null) | BTREE | (null) | (null)
cc_agents_tier_status_log | 1 | FK_cc_agents_tier_status_log_2 | 1 | cc_agent_id | A | 2 | (null) | BTREE | (null) | (null)
cc_agents_tier_status_log | 1 | FK_cc_agents_tier_status_log_3 | 1 | cc_queue_id | A | 14 | (null) | BTREE | (null) | (null)
cc_agents_tier_status_log | 1 | FK_cc_agents_tier_status_log_1 | 1 | cc_agent_tier_status_id | A | 2 | (null) | BTREE | (null) | (null)
cc_agents_tier_status_log | 1 | IDX_cc_agents_tier_status_log_7 | 1 | id | A | 23999 | (null) | BTREE | (null) | (null)
cc_agents_tier_status_log | 1 | IDX_cc_agents_tier_status_log_7 | 2 | date_log | A | 23999 | (null) | BTREE | (null) | (null)

And the query is:
 set @enddate:=now();
 set @startdate:='2014-11-01';
 set @que_id:=-1;
select s.theHour as theHour,avg(s.nrAgents) as nrAgents from
(select date(a.theDateHour) as theDate,extract(hour from a.theDateHour)
as theHour,count(c.cc_agent_tier_status_id) as nrAgents
from (

select dh.theDateHour as theDateHour, max(c.date_log) as maxdatelog,c.*
FROM
( select concat(d.thedate,' ',h.theHour,':0:0') as theDateHour
from
( select DATE(DATE_ADD(date(@startdate), INTERVAL @i:=@i+1 DAY) ) as
theDate from (select @i:=-1) as t1
inner join cc_member_queue_end_log b on 1=1 and
b.id=datediff(@enddate,@startdate)+1 ) as d
left outer join
(SELECT 0 AS theHour UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL
SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION
ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 UNION ALL SELECT 10
UNION ALL SELECT 11 UNION ALL SELECT 12 UNION ALL SELECT 13 UNION ALL
SELECT 14 UNION ALL SELECT 15 UNION ALL SELECT 16 UNION ALL SELECT 17
UNION ALL SELECT 18 UNION ALL SELECT 19 UNION ALL SELECT 20 UNION ALL
SELECT 21 UNION ALL SELECT 22 UNION ALL SELECT 23) as h
on 1=1 ) AS dh
left outer join
cc_agents_tier_status_log as c
on c.date_log=dh.theDateHour where (if(@queue_id<0,1,0) or
if(@queue_id=c.cc_queue_id,1,0))
group by dh.theDateHour,c.cc_queue_id,c.cc_agent_id,c.cc_agent_phone


) as a
left outer join cc_agents_tier_status_log as c
on c.date_log=a.maxdatelog and c.cc_queue_id=a.cc_queue_id and
c.cc_agent_id=a.cc_agent_id and c.cc_agent_phone=a.cc_agent_phone and
c.cc_agent_tier_status_id=2
group by a.theDateHour
order by date(a.theDateHour),extract(hour from a.theDateHour))
as s
group by s.theHour
order by s.theHour;


This query takes 20 seconds to populate.

Table cc_agents_tier_status_log contains log entries of agent_id
login/logout per queue per phone. status_id can have value 1 (logged
out) and 2 (login) at date_log datetime.

The resulting table must contain average number of agents logged in at
every hour per startdate to enddate.

Hope for some hints. Thank you.


The first problem is that you are generating a lot of extra rows before 
you actually need them. The only place where you should be faking the 

Re: mysqldump with single-transaction option.

2014-10-06 Thread shawn l.green

Hello Geetanjali,

On 9/23/2014 7:14 AM, geetanjali mehra wrote:

Can anybody please mention the internals that works when we use mysqldump
as follows:


*mysqldump --single-transaction --all-databases > backup_sunday_1_PM.sql*

MySQL manual says:

This backup operation acquires a global read lock on all tables at the
beginning of the dump (using *FLUSH TABLES WITH READ LOCK
http://dev.mysql.com/doc/refman/5.6/en/flush.html*). As soon as this lock
has been acquired, the binary log coordinates are read and the lock is
released. If long updating statements are running when the FLUSH
http://dev.mysql.com/doc/refman/5.6/en/flush.html statement is issued,
the backup operation may stall until those statements finish. After that,
the dump becomes lock-free and does not disturb reads and writes on the
tables.

Can anyone explain it more? Please.



Which part would you like to address first?

I have a feeling it's more about how FLUSH TABLES WITH READ LOCK works 
but I want to be certain before answering.
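For reference while you decide, the manual passage quoted above corresponds roughly to this statement sequence (the exact order and the FLUSH/binlog steps vary by mysqldump version and options such as --master-data; treat this as a sketch, not a transcript):

```sql
-- Hedged sketch of what mysqldump --single-transaction issues at startup.
FLUSH TABLES WITH READ LOCK;                 -- brief global read lock
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;  -- pin a consistent view of the data
SHOW MASTER STATUS;                          -- read the binary log coordinates
UNLOCK TABLES;                               -- lock released; the dump then reads
                                             -- everything from the snapshot
```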


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: When will MySQL support the ANSI SQL MERGE command?

2014-09-19 Thread shawn l.green


Hello ccleve,

On 9/19/2014 3:06 PM, ccleve wrote:
I need to do upserts and I need cross-database compatibility. I'd hate 
to drop support for MySQL in my product. Does MySQL plan to support 
the ANSI-standard MERGE command for upserts?





You appear to be looking for one of these commands that MySQL does support.

INSERT... SELECT ... ON DUPLICATE KEY UPDATE ...
http://dev.mysql.com/doc/refman/5.6/en/insert.html
http://dev.mysql.com/doc/refman/5.6/en/insert-on-duplicate.html

REPLACE... SELECT ...
http://dev.mysql.com/doc/refman/5.6/en/replace.html

You can file a feature request to add a new command (MERGE) to the 
parser, here.

http://bugs.mysql.com/

But the odds are low that we will add the predicate MERGE to our 
syntax simply because we already have a storage engine called MERGE.

http://dev.mysql.com/doc/refman/5.6/en/merge-storage-engine.html
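A minimal upsert in MySQL's own syntax (the table and columns here are hypothetical):

```sql
-- Hedged sketch: an upsert via INSERT ... ON DUPLICATE KEY UPDATE.
CREATE TABLE counters (
  name VARCHAR(64) PRIMARY KEY,
  hits INT NOT NULL
);

INSERT INTO counters (name, hits)
VALUES ('home', 1)
ON DUPLICATE KEY UPDATE hits = hits + VALUES(hits);
-- the first run inserts the row; repeating it increments hits instead
```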


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: table comments

2014-09-07 Thread shawn l.green

Hello Martin,

On 9/7/2014 7:42 PM, Martin Mueller wrote:


The TABLES table in MySQL's information_schema has a TABLE_COMMENT column. 
Could one use that for ad hoc and manual annotation of that table? And if so, 
could one change its length?  Or are there better ways of producing table notes 
that are kept with the database, as opposed to Evernote or some notebook where 
you never find it again?


Martin Mueller
Professor emeritus of English and Classics
Northwestern University



Every table, every database, and every column all have places in their 
definitions to place a comment. No, the sizes of the columns cannot be 
changed. Sorry.


To see how to set a comment, check out the appropriate ALTER or CREATE 
command for the object you want to annotate.

http://dev.mysql.com/doc/refman/5.6/en/sql-syntax-data-definition.html
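For example (the schema, table, and column names are placeholders):

```sql
-- Hedged sketch: setting and reading a table comment and a column comment.
ALTER TABLE mydb.t COMMENT = 'Notes that travel with the schema and its dumps';
ALTER TABLE mydb.t MODIFY c INT COMMENT 'What column c means';

SELECT TABLE_COMMENT
  FROM information_schema.TABLES
 WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 't';
```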

Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: purge thread

2014-08-27 Thread shawn l.green

Hello Geetanjali,

On 8/26/2014 1:16 AM, geetanjali mehra wrote:

I want to understand how to tune  innodb_max_purge_lag
http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_max_purge_lag
when history list length is high.


Could anyone explain me.


Best Regards,
Geetanjali Mehra
Senior Oracle and MySQL DBA Corporate Trainer and Database Security
Specialist



This setting is not related to the history value. It is there to help 
control the gap between these two values of the InnoDB status report:


Trx id counter 0 290328385
Purge done for trx's n:o  0 290315608 undo n:o  0 17

Compare the current transaction with the oldest transaction that still 
has aged copies of data left in the data area of the tablespace. The 
difference between those two values is the purge backlog.
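Two quick ways to watch that backlog and the throttle setting from the mysql client:

```sql
-- Hedged sketch: observing the purge backlog and the purge-lag throttle.
SHOW ENGINE INNODB STATUS\G    -- compare "Trx id counter" with "Purge done for",
                               -- and note "History list length" under TRANSACTIONS
SHOW VARIABLES LIKE 'innodb_max_purge_lag';
```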


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: fragmentation in innodb index

2014-08-27 Thread shawn l.green

Hello Geetanjali,

On 8/26/2014 1:11 AM, geetanjali mehra wrote:

Hello to all,

I want to know whether my innodb index is fragmented. Is it possible to
know?

Best Regards,
Geetanjali Mehra
Senior Oracle and MySQL DBA Corporate Trainer and Database Security
Specialist



Just like every other piece of data stored in an InnoDB tablespace, the 
index data is stored in pages. At maximum capacity, 15KB of the 16KB 
assigned to each page can consist of data.  At worst, about half of a 
16K page will contain data. This is because each page is one leaf in a 
BTREE structure.


If you add data to a page and you would exceed that 15K limit, we would 
need to split that page. That means that each page (the existing page 
and the one we just created) will now have a near-equal share of the 
data that was on the original page.


That empty space is reused as much as possible to avoid another page 
split.  If removing data from a table makes it possible to combine two 
adjacent leaves in the B-tree, we will. The page that once held the 
extra information is marked as 'available' and it can be filled with 
more index information later or filled with actual table data.


A page is a page is a page. InnoDB decides what goes on a page.

So...
Is an index ever fragmented? No (depending on your definition of 
fragmentation)


Will there ever be some free space within the index tree? Always.

Can index pages be scattered (non-contiguous) within a tablespace file? 
Yes.


Will rebuilding a table ensure that the index pages are made contiguous? 
No.


Do these same answers apply to the actual data stored on a table? Yes.

http://dev.mysql.com/doc/refman/5.6/en/innodb-file-defragmenting.html
http://dev.mysql.com/doc/refman/5.6/en/innodb-file-space.html
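If you simply want a rough view of allocated-but-unused space per table, something like this sketch (the schema name is a placeholder) reports the free-space figure InnoDB tracks:

```sql
SELECT table_name,
       data_length,   -- bytes in clustered-index (data) pages
       index_length,  -- bytes in secondary-index pages
       data_free      -- allocated but currently unused bytes
FROM information_schema.TABLES
WHERE table_schema = 'your_db' AND engine = 'InnoDB';
```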

Does that help?
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: next-key lock

2014-08-27 Thread shawn l.green



On 8/26/2014 1:12 AM, geetanjali mehra wrote:

Hello to all,
In repeatable read isolation level, when we issue:

Select * from new where c1 between 12 and 17 for update;

this range will be locked by innodb by using next-key locks.

But, why is it preventing any other session from inserting any value beyond that
range; any value above the range and any value below the range. I am unable
to understand this.


I believe you are confusing gap locking (a lock on the gap between 
index records) with next-key locking (a lock on an index record plus 
the gap before it).


http://dev.mysql.com/doc/refman/5.6/en/innodb-record-level-locks.html

See also:
http://dev.mysql.com/doc/refman/5.6/en/innodb-next-key-locking.html
http://dev.mysql.com/doc/refman/5.6/en/innodb-locks-set.html
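A two-session sketch of the behavior, assuming a table `new` with an index on c1 and the REPEATABLE READ isolation level:

```sql
-- Session 1: takes next-key locks on the matching index records and
-- the gaps around them, up to the first index record above 17.
START TRANSACTION;
SELECT * FROM new WHERE c1 BETWEEN 12 AND 17 FOR UPDATE;

-- Session 2: whether this blocks depends on where 11 falls relative
-- to the locked gaps. If the nearest existing record below 12 is,
-- say, 5, the gap (5,12) is locked and this INSERT waits:
INSERT INTO new (c1) VALUES (11);
```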



Best Regards,
Geetanjali Mehra
Senior Oracle and MySQL DBA Corporate Trainer and Database Security
Specialist



Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: inconsistent optimization

2014-08-20 Thread shawn l.green

Hi Jim,

On 8/20/2014 11:04 AM, Jim wrote:

Without going into specific details on queries...

Using mysql 5.1 as provided with CentOS6, I've noticed some queries
providing what I can best explain as inconsistent optimization. The
database can be quieted to just controlled queries and at times the same
query will return very quickly when at other times may take minutes.

I don't see the same behavior with mysql5.0 under CentOS5. The same
queries on the same data returns quickly consistently.

When the queries run slowly they show in a process list as either in a
copy to temp table or sending data state. At first I thought query
restructuring to avoid the copy to temp table was a path to a solution,
but now I don't think so since the same query changed so that it no
longer needs a temp table will sit in the sending data state for a
long time.

The queries do eventually come back with correct results, but it takes
minutes rather than milliseconds (sometimes slow; sometimes fast).

Have others seen this behavior? Any explanations?
Any reading to point to for further understanding?



Fluctuations in query times can be the results of configuration mistakes 
(like creating a 1GB query cache or a tiny InnoDB Buffer Pool), or data 
changes (did you add or remove or change a bunch of rows), or query 
patterns (did you add or remove terms from your WHERE clauses, did you 
change which columns were in your SELECT clause, ... ).


To know why a query is doing what it is doing, you need to ask the 
Optimizer. The Optimizer is that part of the server that works out the 
most efficient way to go get the data you are asking for and how to 
process that data once it is pulled from disk or cache.


This is the purpose of the EXPLAIN operator. Just put that word before 
SELECT and see what you get.  An explanation of how to interpret an 
EXPLAIN report is here in the manual (you are reading the manual, right?)

http://dev.mysql.com/doc/refman/5.1/en/explain.html
http://dev.mysql.com/doc/refman/5.1/en/execution-plan-information.html
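For example (table and column names here are hypothetical):

```sql
EXPLAIN SELECT o.id, c.name
FROM orders AS o
JOIN customers AS c ON c.id = o.customer_id
WHERE o.created < '2014-01-01';
-- Watch the "type", "key", and "rows" columns: type=ALL with a large
-- "rows" estimate usually means a full table scan that a suitable
-- index could avoid.
```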

That will give you a starting place. After that, you can refer to the 
other sections of the Optimization chapter to see what you can or 
should be changing to improve your performance.


http://dev.mysql.com/doc/refman/5.1/en/optimization.html

You will also need to learn a little bit about the topic of index 
statistics, as those are what the Optimizer uses to develop its 
execution plans.


http://dev.mysql.com/doc/refman/5.1/en/analyze-table.html
http://dev.mysql.com/doc/refman/5.1/en/show-index.html
http://dev.mysql.com/doc/refman/5.1/en/innodb-restrictions.html (search 
for "ANALYZE TABLE determines index cardinality"...)

http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_stats_sample_pages
http://dev.mysql.com/doc/refman/5.1/en/optimizer-issues.html


Feel free to ask the list any questions that may arise in your research.

Regards,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: how to access Synology's mysql (mariadb) on the command line

2014-08-20 Thread shawn l.green

Hi Wybo,

On 8/20/2014 3:47 PM, Wybo wrote:

My Synology station is on 192.168.178.27,
the database listens to port 3306,
on my FritzBox I forwarded port 3306 to 192.168.178.27,
I /can/ connect to the database on http://192.168.178.27/phpMyAdmin/
But when I try:

mysql --host=192.168.178.27 --password=* --user=wybo

I get:

ERROR 1045 (28000): Access denied for user 'wybo'@'wybo.fritz.box'
(using password: YES)

What am I doing wrong?


Access is granted only if three parts are correct:
1) the login you are using (wybo)
2) the password for that login
3) the host you are connecting from (wybo.fritz.box) is allowed to use 
that account.


It's #3 that most people forget about.  Run this query

SELECT host FROM mysql.user WHERE user='wybo';

If you see a pattern in the results that would match your host's name, 
then you need to compare your password hashes. If you don't know if you 
have a matching host pattern, post the list of host patterns you got 
from the query to the list. We can tell you.


Regards,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: how to access Synology's mysql (mariadb) on the command line

2014-08-20 Thread shawn l.green

Hello Wybo,

I cleansed your reply and cc:'ed the list again to share the answer.

On 8/20/2014 4:24 PM, Wybo wrote:

Hi Shawn,

Thanks for your prompt reply - I suppose I'll have to do that query via
phpMysqlAdmin. When I do that, the only host that appears is localhost.
However, when I browse the user table, I also see %edited%, which is the
hostname of the synology station, see the attached screenshot (%also edited%). 
Does this
mean that I have to add a new entry in this table? If so, can I do that
via phpMysqlAdmin?



Yes, you will need to use your phpMysqlAdmin session to issue an 
appropriate GRANT command so that the 'wybo' user can login from 
'wybo.fritz.box'.


Example -

GRANT ALL PRIVILEGES ON *.* TO 
'wybo'@'wybo.fritz.box' IDENTIFIED BY 'password goes here in plain text';

(Replace ALL PRIVILEGES with the specific permissions you actually want 
to give.)


Research the GRANT command itself (and the other account management 
commands) to see what else you can do while creating an account or 
adjusting permissions.

http://dev.mysql.com/doc/refman/5.6/en/account-management-sql.html

Examples of the types of host patterns you can use are also in the 
manual, here:

http://dev.mysql.com/doc/refman/5.6/en/account-names.html
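A few sketches of the host patterns those pages describe (account names and the 'secret' password are illustrative):

```sql
-- Exact host name:
GRANT SELECT ON mydb.* TO 'wybo'@'wybo.fritz.box' IDENTIFIED BY 'secret';

-- Any host in a domain, via the % wildcard:
GRANT SELECT ON mydb.* TO 'wybo'@'%.fritz.box' IDENTIFIED BY 'secret';

-- Any host in an IP range:
GRANT SELECT ON mydb.* TO 'wybo'@'192.168.178.%' IDENTIFIED BY 'secret';
```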

Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Simplifying Queries

2014-07-20 Thread shawn l.green

Hello Surya,

Part of the problem may be that you are so focused on the details that 
you might have lost sight of the purpose.


On 7/12/2014 8:24 AM, Surya Savarika wrote:

Hi,
I have two query series that I wonder whether they can be compacted
into a single query:

FIRST QUERY SERIES

   cursor.execute("""select d.ID, d.Name, b.SupersetID from
books_data as d join books as b on d.ID=b.BooksDataID2
  where b.BooksDataID!=b.BooksDataID2 and b.ReligionsID=%s""", 
(rel_id,))


Are you trying to find the names of any supersets that contain any book 
that has a certain ReligionsID value? (list1)



   tmp = cursor.fetchall()
   cursor.execute("""select d.ID from books_data as d join books as
b on d.ID=b.BooksDataID2 join books_compilations as c
  on d.ID=c.BooksDataID where b.ReligionsID=%s""", (rel_id,))



Are you trying to find a list of compilations that contain any books 
that has a certain ReligionsID value? (list2)




   junk_ids = [itm[0] for itm in cursor]
   poss_books_data = []
   for id, name, ss_id in tmp:
 if id not in junk_ids:
   poss_books_data.append([id, name, ss_id])



This seems to be a process by which you determine if there are any books 
in list 1 (the first query) that are not in list 2 (the second query).


Did I understand that correctly?



SECOND QUERY SERIES

   cursor.execute("""select ReligionsID from books where
BooksDataID=%s""", (tmp_ids[0],))
   rel_id = cursor.fetchone()[0]  # The first entry will always give
the correct value


Determine the ReligionsID for a particular book.


   cursor.execute("""select d.ID, d.Name, b.SupersetID from
books_data as d join books as b on d.ID=b.BooksDataID2
  where b.BooksDataID!=b.BooksDataID2 and b.ReligionsID=%s""", 
(rel_id,))


Find all the related books that share the same ReligionsID value.



   tmp = cursor.fetchall()
   cursor.execute("""select d.ID from books_data as d join books as
b on d.ID=b.BooksDataID2 join books_compilations as c
  on d.ID=c.BooksDataID where b.ReligionsID=%s""", (rel_id,))



Find any compilations that contain the same ReligionsID value.

Did I decode those questions properly?


I don't know that they're necessary, but here are the table definitions:

mysql> describe books;
+-++--+-+-++
| Field   | Type   | Null | Key | Default | Extra  |
+-++--+-+-++
| ID  | int(11)| NO   | PRI | NULL| auto_increment |
| ReligionsID | int(11)| NO   | MUL | NULL||
| PrimaryReligion | tinyint(1) | YES  | | 0   ||
| BooksDataID | int(11)| NO   | | NULL||
| BooksDataID2| int(11)| YES  | | NULL||
| SupersetID  | int(11)| YES  | | NULL||
+-++--+-+-++
6 rows in set (0.09 sec)

mysql> describe books_data;
+------------+--------------------------------------------------------------------------------------------------------------------+------+-----+---------+----------------+
| Field      | Type                                                                                                               | Null | Key | Default | Extra          |
+------------+--------------------------------------------------------------------------------------------------------------------+------+-----+---------+----------------+
| ID         | int(11)                                                                                                            | NO   | PRI | NULL    | auto_increment |
| Name       | varchar(30)                                                                                                        | NO   |     | NULL    |                |
| Label      | enum('Cannonised','Uncannonised, fully accepted','Uncannonised, not fully accepted','Uncannonised, controversial') | NO   |     | NULL    |                |
| PrimaryKey | tinyint(1)                                                                                                         | YES  |     | 0       |                |
+------------+--------------------------------------------------------------------------------------------------------------------+------+-----+---------+----------------+
4 rows in set (0.13 sec)

mysql> describe books_compilations;
+-+-+--+-+-++
| Field   | Type| Null | Key | Default | Extra  |
+-+-+--+-+-++
| ID  | int(11) | NO   | PRI | NULL| auto_increment |
| Name| varchar(30) | NO   | | NULL||
| SupersetID  | int(11) | NO   | | NULL||
| BooksDataID | int(11) | NO   | | NULL||
+-+-+--+-+-++
4 rows in set (0.20 sec)



If you can verify that I have correctly stated 

Re: How to get all known bugs on specified mysql version?

2014-07-07 Thread shawn l.green

Hello,

On 7/6/2014 7:42 PM, 娄帅 wrote:

Hi, all,

I want to use MySQL 5.6.18 in production, so i want to get all known bugs
on this version. Now i check the bugfix section in MySQL 5.6.19 and above
version. Am i doing the right thing? Or Do you have a better method?

Any input will be appreciated!




The easiest way to see which bugs are pending is to actually search the 
bugs database, http://bugs.mysql.com


Here is one example search you can perform. Note, many of these bugs 
(such as bug #49728) are low-impact edge cases that are prioritized 
lower than other, more important high-impact bugs. Our development teams 
have limited resources.  We triage and prioritize what they can work on 
based on how severe the bug is, how often it may be encountered, and how 
easy it may be to work around.


http://bugs.mysql.com/search.php?search_for=&status[]=Active&severity=&limit=All&order_by=id&cmd=display&phpver=5.5&os=0&os_details=&bug_age=0&tags=&similar=&target=&last_updated=0&defect_class=all&workaround_viability=all&impact=all&fix_risk=all&fix_effort=all&triageneeded=

Anyone can join the fight! Start by submitting an OCA (Oracle 
Contributor's Agreement).

http://www.oracle.com/technetwork/community/oca-486395.html
If you have any questions about the OCA, please contact the MySQL 
community team.

http://www.mysql.com/about/contact/?topic=community


Then, any patches you provide can be analyzed, possibly improved, and 
potentially merged into the actual source code.  For some recent 
examples, see:

http://www.tocker.ca/2014/06/09/mysql-5-6-19-community-release-notes.html

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Optimizing InnoDB tables

2014-06-27 Thread shawn l.green

Hello Antonio,

On 6/27/2014 9:31 AM, Antonio Fernández Pérez wrote:

​Hi Reindl,

Thanks for your attention.

Following the previous mail, I have checked my MySQL's configuration and
innodb_file_per_table is enabled so, I think that this parameter not
affects directly to fragmented tables in InnoDB (In this case).
I would like to know, if is possible, why after execute an analyze table
command on some fragmented table, after that, appears fragmented again.

Regards,

Antonio.​



InnoDB operates by storing multiple rows on pages. Each page is 16KB. 
Of that, 1KB is reserved for metadata (a tiny index showing where on a 
page each row sits, links to various other locations, checksums, ...). 
The remaining 15KB can be used for your actual data.


If you delete a row of data, that space on a page is made available but 
the page does not change size. It is always 16K.


InnoDB stores data in the order of your PK.  If you need to insert a new 
row between other rows on a 'full' page, then the page needs to split. 
This creates 2 new pages that are about 50% full.


If two adjacent pages (A and B) become too 'empty' they can be combined 
into one page. This puts the data from both pages onto one of them (page 
A, for example). However page B remains empty and becomes available for 
any other purpose.


Is that what you are calling 'fragmentation' ?

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Optimizing InnoDB tables

2014-06-24 Thread shawn l.green

Hello Antonio,

On 6/24/2014 7:03 AM, Antonio Fernández Pérez wrote:

​Hi list,

I was trying to optimize the InnoDB tables. I have executed the next query
to detect what are the fragmented tables.

​​SELECT TABLE_SCHEMA,TABLE_NAME
FROM information_schema.TABLES WHERE TABLE_SCHEMA NOT IN ('information_schema','mysql') AND
Data_free > 0

After that, I have seen that there are 49 fragmented tables. With one
table, I have executed "optimize table table_name;" and "analyze table
table_name;". The result is the same: the table continues to be fragmented.

Any ideas? I have followed the mysqltuner recomendations ...

Thanks in advance.

Regards,

Antonio.



It makes a huge difference if the tables you are trying to optimize have 
their own tablespace files or if they live inside the common tablespace.


http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table
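A quick sketch of checking and using that option (the schema and table names are placeholders):

```sql
-- Check whether the option is currently enabled:
SHOW VARIABLES LIKE 'innodb_file_per_table';

-- It is dynamic in 5.5; enabling it affects tables created or
-- rebuilt afterwards:
SET GLOBAL innodb_file_per_table = ON;

-- Rebuild an existing table into its own .ibd tablespace file:
ALTER TABLE mydb.mytable ENGINE=InnoDB;
```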


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Optimizing InnoDB tables

2014-06-24 Thread shawn l.green

Hello Reindl,

On 6/24/2014 3:29 PM, Reindl Harald wrote:



Am 24.06.2014 21:07, schrieb shawn l.green:

It makes a huge difference if the tables you are trying to optimize have their 
own tablespace files or if they live
inside the common tablespace.

http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table


which is the most stupid default in case of innodb and only survivable
without a lot of work for people who realize that *before* start
operations and enable innodb_file_per_table from the very begin

having defaults which can't be changed later without complete re-import
of data and prevent from ever get disk space for long ago deleted data
free is the most wrong thing a software developer can do



The tables can be moved from the common tablespace into their own 
tablespace at any time after the option is enabled. The space they once 
occupied within the primary tablespace will remain and it will be marked 
as 'available' for any general purpose (such as the UNDO log)


The only way to shrink the primary tablespace is, as you correctly 
described, through a dump/restore of your data. This process to resize 
the primary tablespace (such as to shrink it) must be followed precisely 
or problems will result.


http://dev.mysql.com/doc/refman/5.6/en/innodb-data-log-reconfiguration.html

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: access denied fpr user 'root'@'localhost' (using password: NO)

2014-05-29 Thread shawn l.green

Hello Érico

On 5/29/2014 2:22 PM, Érico wrote:

I have ran the following to test a fix for an app issue :

delete from mysql.user where user='';
2 lines got affected

after this I can´t connect through command line anymore :

./mysqladmin -u root password pwd

I get access denied for user 'root'@'localhost (using password:'NO')

how can I restore the db so I can connect through command line again ?

thks



What that tells me is that you were never actually logging in as root 
but the system was authenticating you as the 'anonymous' user. Quoting 
from the very fine manual:


http://dev.mysql.com/doc/refman/5.6/en/account-names.html

A user name is either a nonblank value that literally matches the user 
name for incoming connection attempts, or a blank value (empty string) 
that matches any user name. An account with a blank user name is an 
anonymous user. To specify an anonymous user in SQL statements, use a 
quoted empty user name part, such as ''@'localhost'.




http://dev.mysql.com/doc/refman/5.6/en/connection-access.html

Identity checking is performed using the three user table scope columns 
(Host, User, and Password). The server accepts the connection only if 
the Host and User columns in some user table row match the client host 
name and user name and the client supplies the password specified in 
that row.

...
If the User column value is nonblank, the user name in an incoming 
connection must match exactly. If the User value is blank, it matches 
any user name. If the user table row that matches an incoming connection 
has a blank user name, the user is considered to be an anonymous user 
with no name, not a user with the name that the client actually 
specified. This means that a blank user name is used for all further 
access checking for the duration of the connection (that is, during 
Stage 2).

...
If you are able to connect to the server, but your privileges are not 
what you expect, you probably are being authenticated as some other 
account. To find out what account the server used to authenticate you, 
use the CURRENT_USER() function.



That same page in the manual (and its siblings) should also answer your 
questions as to how MySQL uses the `user` table, what the empty `user` 
and `password` column mean to login attempts, and how to configure 
SSL-based connections.
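Two quick checks you can run once you have any working login (the second requires read access to the mysql schema):

```sql
-- Which account did the server actually authenticate you as?
SELECT CURRENT_USER(), USER();

-- Are there anonymous accounts that could be shadowing 'root'?
SELECT user, host FROM mysql.user WHERE user = '';
```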


If you have forgotten your actual root@localhost password, you can reset 
it following one of the procedures provided here.

http://dev.mysql.com/doc/refman/5.6/en/resetting-permissions.html

Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: access denied fpr user 'root'@'localhost' (using password: NO)

2014-05-29 Thread shawn l.green

Hello Érico,

On 5/29/2014 3:51 PM, Érico wrote:

I am really sorry  about this one ..
the connection is ok ...

I had not checked that I was using mysqladmin instead of mysql

now please how can I check what is wrong with my application ( My SQL Admin
)

at its login page it asks for user / pwd / server and db

using both localhost and 127.0.01 ... it gets the same error :
access denied for user  'root'@'localhost'  

the app has a php config page where it fills these info

I am able to connect to it manually too using :
./mysql -h localhost -u root -pmy_pwd mysql-admin

but the app keeps geting the access denied error

would it be sometihng related to my /et/hosts ?

its content :

127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
fe80::1%lo0 localhost
127.0.0.1 mysqld
127.0.0.1 mac
localhost mac


my SO is a mac os 10.6.8

Thks Again !!
... snip ...


What is the result of this query:

SELECT user, host, length(password) from mysql.user;

What happens if you change your login to this? (You should not put your 
passwords on your command lines if you can avoid it:

http://dev.mysql.com/doc/refman/5.6/en/password-security-user.html
http://dev.mysql.com/doc/refman/5.6/en/connecting.html
)

./mysql -h 127.0.0.1 --port=3306 --protocol=TCP -u root -p mysql-admin

See also:
http://dev.mysql.com/doc/refman/5.6/en/access-denied.html

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Advices for work with big tables

2014-05-16 Thread shawn l.green

Hello Antonio,

On 5/16/2014 9:49 AM, Antonio Fernández Pérez wrote:

Hi,

I write to the list because I need your advices.

I'm working with a database with some tables that have a lot of rows, for
example I have a table with 8GB of data.



How you design your tables can have a huge impact on performance. If you 
are frequently doing aggregation (GROUP BY...) queries for reports, then 
you may need to have your data pre-aggregated at various levels. 
Perhaps, as an example, you want to run a weekly report of how often 
someone logs in. Every day, you have an average of 100 users each 
logging in 10 times a day. That is 1000 rows of connection information. 
 Multiply that by a week and you have 7000 rows; multiply it by a year 
and you have 365,000 rows (approximately).


If you create a table or set of tables where you have already summarized 
your most frequently used data (for example: login, date, total minutes 
connected for that date, total number of connections for that day, ...), 
then you have reduced how much work your weekly report needs to do from 
7000 rows to just 7.  How much faster would that be?


Each day, you add the previous day's totals to your summary tables.

For more information on how to do this kind of pre-computation analysis 
and optimization, do some research on the topic of OLAP (online 
analytical processing)

http://en.wikipedia.org/wiki/OLAP
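The daily roll-up described above can be sketched like this (all table and column names are hypothetical):

```sql
-- Summary table keyed by user and day:
CREATE TABLE login_daily_summary (
  login             VARCHAR(32)  NOT NULL,
  login_date        DATE         NOT NULL,
  total_minutes     INT UNSIGNED NOT NULL,
  total_connections INT UNSIGNED NOT NULL,
  PRIMARY KEY (login, login_date)
);

-- Each day, roll up yesterday's raw connection rows:
INSERT INTO login_daily_summary
SELECT login, DATE(connected_at), SUM(minutes), COUNT(*)
FROM connection_log
WHERE connected_at >= CURRENT_DATE - INTERVAL 1 DAY
  AND connected_at <  CURRENT_DATE
GROUP BY login, DATE(connected_at);
```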


How can I do to have a fluid job with this table?



Stop trying to use just the one table for everything?



My server works with a disk cabinet and I think that sharding and partitioning
are technologies that do not apply. Working with a lot of data means that
there are some slow queries, even with the correct indexes created.



Partition pruning is a very good way of improving query performance. The 
trick is to design your partitions to match the majority of your query 
patterns.


http://dev.mysql.com/doc/refman/5.6/en/partitioning.html
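For instance, if most queries are bounded by date, a RANGE partitioning sketch like this (names and years illustrative) lets the server prune to only the partitions a query can touch; note the partitioning column must be part of the primary key:

```sql
CREATE TABLE radius_log (
  id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  login      VARCHAR(32)     NOT NULL,
  event_date DATE            NOT NULL,
  PRIMARY KEY (id, event_date)
)
PARTITION BY RANGE (YEAR(event_date)) (
  PARTITION p2012 VALUES LESS THAN (2013),
  PARTITION p2013 VALUES LESS THAN (2014),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);
```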



So, one option is to delete data but, I use a RADIUS system to authenticate
and authorize users to connect to Internet. For this reason I need work
with almost all data. Another solution is increase the server resources.

Any ideas?



See above.



Thanks in advance.

Regards,

Antonio.



--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: multilple mysql engines, one set of shared table spaces?

2014-05-14 Thread shawn l.green

Hello Bruce,

On 5/14/2014 2:11 PM, Bruce Ferrell wrote:

OK, put away the flamethrowers, I KNOW it's dumb.

I've been asked for the umpteenth time: is this possible, and if so, under
what conditions?

So I pose the question to the community, is it? Under what conditions?
Is it reliable or not?

Are there authoritative references to support the answers?

Inquiring minds want to know

Thanks in advance

Bruce Ferrell




To provide confirmation that sharing files is a 'bad idea' between any 
two running mysqld binaries, here are the instructions in the manual on 
how to have two or more mysqld instances (which can be the same program 
or two or more different versions of mysqld) running on your machine at 
the same time. Consider a shared disk as being part of the same machine 
as it's the files that really matter in your situation.


http://dev.mysql.com/doc/refman/5.6/en/multiple-servers.html

Really. I mean it. Don't do it.

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: multilple mysql engines, one set of shared table spaces? (addendum)

2014-05-14 Thread shawn l.green



On 5/14/2014 3:45 PM, shawn l.green wrote:

Hello Bruce,

On 5/14/2014 2:11 PM, Bruce Ferrell wrote:

OK, put away the flamethrowers, I KNOW it's dumb.

I've been asked for the umpteenth time: is this possible, and if so, under
what conditions?

So I pose the question to the community, is it? Under what conditions?
Is it reliable or not?

Are there authoritative references to support the answers?

Inquiring minds want to know

Thanks in advance

Bruce Ferrell




To provide confirmation that sharing files is a 'bad idea' between any
two running mysqld binaries, here are the instructions in the manual on
how to have two or more mysqld instances (which can be the same program
or two or more different versions of mysqld) running on your machine at
the same time. Consider a shared disk as being part of the same machine
as it's the files that really matter in your situation.

http://dev.mysql.com/doc/refman/5.6/en/multiple-servers.html

Really. I mean it. Don't do it.



However, if what you want to do is have two MySQL instances setup to 
point to the same files from the same machine or different host machines 
(such as in a shared SAN disk), you can do that but only one (and I do 
mean exactly one) of them may be started up at a time.  This is known as 
an Active/Passive configuration and it is one of our published HA options.


http://www.mysql.com/content/download/id/284/

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: libmysql.lib

2014-04-29 Thread shawn l.green

Hello David,

On 4/29/2014 4:14 PM, David Clark wrote:

mysql is open source as I understand it.

I have one project out of about 6 that is failing.

I want to debug into libmysql.lib to get a better idea what might be going on...

even if it is a problem in my code.  Where might I find the source/project files
to to this?

Thank you,

David Clark



The source packages for each Community release are available from the 
same site as the binary package downloads. Just change which platform 
you are looking for


http://dev.mysql.com/downloads/mysql/
http://downloads.mysql.com/archives/

You can also review the bug reports to see if this is something we 
already fixed and in which release we fixed it

http://bugs.mysql.com/

Regards,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Data masking for mysql

2014-04-16 Thread shawn l.green

Hello Reena,

On 4/16/2014 2:57 AM, reena.kam...@jktech.com wrote:

A client will never give a production db with sensitive data to an 
outsourced dev team, but the outsourced testing team needs a clone of 
the production db for testing. For that, the client can give a copy of 
the production db with the sensitive data masked.
That's why a data masking tool is required.

-Original Message-
From: Jigal van Hemert ji...@xs4all.nl
Sent: Wednesday, 16 April, 2014 11:56am
To: mysql@lists.mysql.com
Subject: Re: Data masking for mysql

Hi,

On 15-4-2014 18:42, Peter Brawley wrote:

On 2014-04-15 5:37 AM, reena.kam...@jktech.com wrote:

It can be done by data masking tool itself. Its one time activity, I
do not need it again  again.


Rilly? If that's so, the data will never be accessed.


I'm starting to think that a design has already been settled on: a
database with the original data, a copy with the masked data, and then
there just needs to be a tool that copies the data and modifies
(masks) some fields. Whatever solution we come up with (views, a db copy
with an UPDATE query that modifies the data, ...) will not be
accepted unless it fits that original concept.
Most likely the client came up with the concept, and the outsourced
development team doesn't dare to suggest that a different approach is
probably a better way to reach the goal. But I may be wrong here :-)



The only tool you need is a well-formed query or set of queries and 
perhaps a function or two (either one of the built-in functions or one 
you construct yourself).


Instead of executing the direct query

SELECT field1, sensitiveField, field3 FROM ...

you run some manipulation on the `sensitiveField` field, like this:

SELECT field1, CONCAT('****', RIGHT(sensitiveField, 4)), field3
FROM ...


http://dev.mysql.com/doc/refman/5.6/en/string-functions.html


If you don't like that particular transformation, how about a hashing or 
encryption function?

http://dev.mysql.com/doc/refman/5.6/en/encryption-functions.html

The choice of how to obscure their data is up to the provider of the 
data. They choose which fields they want to hide and how they want to 
hide them. Then, they write very simple queries to create the set of 
data they want you to see.  These queries can be used to build views or 
can be used to send the data into a copy of the table or can be used to 
output the data directly to file. These are simple choices.
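To make that idea concrete, here is a minimal sketch of such a masking query, run against a throwaway in-memory SQLite table purely for illustration (the table and column names are invented; SQLite's `||` and `substr` stand in for MySQL's CONCAT and RIGHT, and Python's hashlib stands in for MySQL's SHA2):

```python
import hashlib
import sqlite3

# Throwaway in-memory table standing in for a copy of the production data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER, card TEXT)")
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "4111111111111111"), (2, "5500005555555559")])

# The "tool" is just a SELECT that masks the sensitive column.
# '****' || substr(card, -4) plays the role of
# CONCAT('****', RIGHT(sensitiveField, 4)) in MySQL.
masked = con.execute(
    "SELECT id, '****' || substr(card, -4) FROM customers ORDER BY id"
).fetchall()
print(masked)  # [(1, '****1111'), (2, '****5559')]

# A hashing alternative (the analogue of MySQL's SHA2(card, 256)):
# irreversible, but equal inputs still compare equal for test purposes.
hashed = hashlib.sha256(b"4111111111111111").hexdigest()
```

The same SELECT can feed an INSERT ... SELECT into a copy table, a CREATE VIEW, or a SELECT ... INTO OUTFILE, exactly as described above.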


There is no 'tool' to do this for you. It's a very basic set of queries 
that any competent DBA should be able to provide. If neither your 
programmers and DBAs nor their programmers and DBAs can write this type 
of query, you should all seriously question your abilities to be working 
with data and look into improving your skills immediately.


Regards,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN




Re: Excluding MySQL database tables from mysqldump

2014-04-07 Thread shawn l.green

Hello Tim,

On 4/4/2014 10:27 PM, Tim Johnson wrote:

* Tim Johnson t...@akwebsoft.com [140404 17:46]:

Currently I'm running mysql on a Mac OSX partition.

I have installed an ubuntu dual-booted partition and put mysql on
it. I have already set up a mysql user on the ubuntu OS.

In the past I have used mysqldump with just the --all-databases
option to transfer data across different linux partitions.

I'm wondering if I should explicitly exclude some of the tables from
the mysql database. If so, which? Perhaps mysql.user?

thoughts? Opinions?
thanks

   I should add the following:

   1)the only user added to the new partition is the same as the
   primary non-root user on the Mac partition. Same credentials

   2)this is a workstation - it is closed to the outside world.

   FYI: ...



There are several ways to select which data you want in the backup. You 
can backup per-table, per-database, object type per database (routines, 
triggers), or global objects (events). What level of detail you want to 
copy from your old instance into your new instance is completely up to you.
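As a rough sketch of those levels of granularity (the option names are standard mysqldump flags, but the database and table names are invented, and the commands are only assembled here, not executed):

```python
# Hedged sketch: assemble mysqldump invocations for different backup scopes.
# "shop", "forum", "orders", "customers" are example names only.
per_table = ["mysqldump", "shop", "orders", "customers"]   # specific tables
per_db    = ["mysqldump", "--databases", "shop", "forum"]  # whole databases
with_code = ["mysqldump", "--databases", "shop",
             "--routines", "--events"]                     # include stored routines/events
skip_user = ["mysqldump", "--all-databases",
             "--ignore-table=mysql.user"]                  # everything except mysql.user

for cmd in (per_table, per_db, with_code, skip_user):
    print(" ".join(cmd))
```

The last form is one way to answer the original question: dump everything but leave the new instance's mysql.user (and thus its credentials) untouched.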


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



