Re: checking for repetitive queries

2015-05-20 Thread Mogens Melander
https://www.thomas-krenn.com/en/wiki/MySQL_Performance_Tuning#Tuning_Tools

On Mon, May 18, 2015 23:44, Reindl Harald wrote:

 Am 18.05.2015 um 23:37 schrieb Steve Quezadas:
 I want to make sure my caching system is working properly and I want
 to make sure my mysql server isn't being held up by repetitive queries
 (i.e. the side products table that appears on every web page).
 I'm pretty sure I cached the site pretty well, but want to make sure
 that I didn't miss anything.

 Is there some sort of tool that allows me to check for repetitive
 queries?

 likely nobody knows what you are talking about
 http://www.catb.org/esr/faqs/smart-questions.html#beprecise





-- 
Mogens Melander
+66 8701 33224







Re: checking for repetitive queries

2015-05-18 Thread Andrew Moore
pt-query-digest will help here. You can use a slow query log or tcpdump to
track what the application is asking of the db. The report provided by the
tool can be per db if you wish or the complete schema. It will provide a
summary of the collection and you'll see how often your queries are being run.
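
For example, assuming the slow-log route, the log can temporarily be opened
up to catch every statement (the file path here is just an example, and
long_query_time should be restored afterwards):

    SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow.log';
    SET GLOBAL long_query_time = 0;   -- log everything, repeats included
    SET GLOBAL slow_query_log = ON;

Feeding the resulting file to pt-query-digest groups identical statements
into fingerprints and reports the execution count of each, which makes the
repetitive ones obvious.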

On Mon, May 18, 2015 at 10:37 PM, Steve Quezadas st...@modelprinting.com
wrote:

 I want to make sure my caching system is working properly and I want
 to make sure my mysql server isn't being held up by repetitive queries
 (i.e. the side products table that appears on every web page).
 I'm pretty sure I cached the site pretty well, but want to make sure
 that I didn't miss anything.

 Is there some sort of tool that allows me to check for repetitive queries?

 - Steve





Re: checking for repetitive queries

2015-05-18 Thread Andrew Moore
I got cut short on the previous message, but I'm sure you get the gist.

A

On Mon, May 18, 2015 at 11:25 PM, Andrew Moore eroomy...@gmail.com wrote:

 pt-query-digest will help here. You can use a slow query log or tcpdump to
 track what the application is asking of the db. The report provided by the
 tool can be per db if you wish or the complete schema. It will provide a
 summary of the collection and you'll see how often your queries are being run.

 On Mon, May 18, 2015 at 10:37 PM, Steve Quezadas st...@modelprinting.com
 wrote:

 I want to make sure my caching system is working properly and I want
 to make sure my mysql server isn't being held up by repetitive queries
 (i.e. the side products table that appears on every web page).
 I'm pretty sure I cached the site pretty well, but want to make sure
 that I didn't miss anything.

 Is there some sort of tool that allows me to check for repetitive queries?

 - Steve






Re: checking for repetitive queries

2015-05-18 Thread Reindl Harald


Am 18.05.2015 um 23:37 schrieb Steve Quezadas:

I want to make sure my caching system is working properly and I want
to make sure my mysql server isn't being held up by repetitive queries
(i.e. the side products table that appears on every web page).
I'm pretty sure I cached the site pretty well, but want to make sure
that I didn't miss anything.

Is there some sort of tool that allows me to check for repetitive queries?


likely nobody knows what you are talking about
http://www.catb.org/esr/faqs/smart-questions.html#beprecise






checking for repetitive queries

2015-05-18 Thread Steve Quezadas

I want to make sure my caching system is working properly and I want
to make sure my mysql server isn't being held up by repetitive queries
(i.e. the side products table that appears on every web page).
I'm pretty sure I cached the site pretty well, but want to make sure
that I didn't miss anything.

Is there some sort of tool that allows me to check for repetitive queries?

- Steve




Re: Simplifying Queries

2014-07-20 Thread shawn l.green

Hello Surya,

Part of the problem may be that you are so focused on the details that
you might have lost sight of the purpose.


On 7/12/2014 8:24 AM, Surya Savarika wrote:

Hi,
I have two query series that I wonder whether they can be compacted
into a single query:

FIRST QUERY SERIES

   cursor.execute("""select d.ID, d.Name, b.SupersetID from
       books_data as d join books as b on d.ID=b.BooksDataID2
       where b.BooksDataID!=b.BooksDataID2 and b.ReligionsID=%s""",
       (rel_id,))


Are you trying to find the names of any supersets that contain any book 
that has a certain ReligionsID value? (list1)



   tmp = cursor.fetchall()
   cursor.execute("""select d.ID from books_data as d join books as b
       on d.ID=b.BooksDataID2 join books_compilations as c
       on d.ID=c.BooksDataID where b.ReligionsID=%s""", (rel_id,))



Are you trying to find a list of compilations that contain any book
that has a certain ReligionsID value? (list2)




   junk_ids = [itm[0] for itm in cursor]
   poss_books_data = []
   for id, name, ss_id in tmp:
     if id not in junk_ids:
       poss_books_data.append([id, name, ss_id])



This seems to be a process by which you determine if there are any books 
in list 1 (the first query) that are not in list 2 (the second query).


Did I understand that correctly?
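
If so, the two statements plus the Python filter can probably collapse into
a single anti-join; a minimal sketch (untested, using the tables as defined
below):

    SELECT d.ID, d.Name, b.SupersetID
      FROM books_data AS d
      JOIN books AS b ON d.ID = b.BooksDataID2
      LEFT JOIN books_compilations AS c ON d.ID = c.BooksDataID
     WHERE b.BooksDataID != b.BooksDataID2
       AND b.ReligionsID = %s
       AND c.BooksDataID IS NULL

The LEFT JOIN ... IS NULL pattern keeps exactly the rows of list 1 that have
no match in list 2, which is what the loop over junk_ids computes in Python.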



SECOND QUERY SERIES

   cursor.execute("""select ReligionsID from books where
       BooksDataID=%s""", (tmp_ids[0],))
   rel_id = cursor.fetchone()[0]  # The first entry will always give
                                  # the correct value


Determine the ReligionsID for a particular book.


   cursor.execute("""select d.ID, d.Name, b.SupersetID from
       books_data as d join books as b on d.ID=b.BooksDataID2
       where b.BooksDataID!=b.BooksDataID2 and b.ReligionsID=%s""",
       (rel_id,))


Find all the related books that share the same ReligionsID value.



   tmp = cursor.fetchall()
   cursor.execute("""select d.ID from books_data as d join books as b
       on d.ID=b.BooksDataID2 join books_compilations as c
       on d.ID=c.BooksDataID where b.ReligionsID=%s""", (rel_id,))



Find any compilations that contain the same ReligionsID value.

Did I decode those questions properly?
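
If that reading is right, the rel_id lookup can also be folded in as a
subquery, so the whole second series becomes one round trip; a sketch under
the same assumptions:

    SELECT d.ID, d.Name, b.SupersetID
      FROM books_data AS d
      JOIN books AS b ON d.ID = b.BooksDataID2
      LEFT JOIN books_compilations AS c ON d.ID = c.BooksDataID
     WHERE b.BooksDataID != b.BooksDataID2
       AND c.BooksDataID IS NULL
       AND b.ReligionsID = (SELECT ReligionsID FROM books
                             WHERE BooksDataID = %s LIMIT 1)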


I don't know that they're necessary, but here are the table definitions:

mysql> describe books;
+-----------------+------------+------+-----+---------+----------------+
| Field           | Type       | Null | Key | Default | Extra          |
+-----------------+------------+------+-----+---------+----------------+
| ID              | int(11)    | NO   | PRI | NULL    | auto_increment |
| ReligionsID     | int(11)    | NO   | MUL | NULL    |                |
| PrimaryReligion | tinyint(1) | YES  |     | 0       |                |
| BooksDataID     | int(11)    | NO   |     | NULL    |                |
| BooksDataID2    | int(11)    | YES  |     | NULL    |                |
| SupersetID      | int(11)    | YES  |     | NULL    |                |
+-----------------+------------+------+-----+---------+----------------+
6 rows in set (0.09 sec)

mysql> describe books_data;
+------------+----------------------------------------------------------------------------------------------------------------------+------+-----+---------+----------------+
| Field      | Type                                                                                                                 | Null | Key | Default | Extra          |
+------------+----------------------------------------------------------------------------------------------------------------------+------+-----+---------+----------------+
| ID         | int(11)                                                                                                              | NO   | PRI | NULL    | auto_increment |
| Name       | varchar(30)                                                                                                          | NO   |     | NULL    |                |
| Label      | enum('Cannonised','Uncannonised, fully accepted','Uncannonised,\n not fully accepted','Uncannonised, controversial') | NO   |     | NULL    |                |
| PrimaryKey | tinyint(1)                                                                                                           | YES  |     | 0       |                |
+------------+----------------------------------------------------------------------------------------------------------------------+------+-----+---------+----------------+
4 rows in set (0.13 sec)

mysql> describe books_compilations;
+-------------+-------------+------+-----+---------+----------------+
| Field       | Type        | Null | Key | Default | Extra          |
+-------------+-------------+------+-----+---------+----------------+
| ID          | int(11)     | NO   | PRI | NULL    | auto_increment |
| Name        | varchar(30) | NO   |     | NULL    |                |
| SupersetID  | int(11)     | NO   |     | NULL    |                |
| BooksDataID | int(11)     | NO   |     | NULL    |                |
+-------------+-------------+------+-----+---------+----------------+
4 rows in set (0.20 sec)



If you can verify that I have correctly stated the purpose of each query, we
can look at how to combine them.

Simplifying Queries

2014-07-12 Thread Surya Savarika
Hi,
I have two query series that I wonder whether they can be compacted
into a single query:

FIRST QUERY SERIES

  cursor.execute("""select d.ID, d.Name, b.SupersetID from
      books_data as d join books as b on d.ID=b.BooksDataID2
      where b.BooksDataID!=b.BooksDataID2 and b.ReligionsID=%s""",
      (rel_id,))
  tmp = cursor.fetchall()
  cursor.execute("""select d.ID from books_data as d join books as b
      on d.ID=b.BooksDataID2 join books_compilations as c
      on d.ID=c.BooksDataID where b.ReligionsID=%s""", (rel_id,))
  junk_ids = [itm[0] for itm in cursor]
  poss_books_data = []
  for id, name, ss_id in tmp:
    if id not in junk_ids:
      poss_books_data.append([id, name, ss_id])

SECOND QUERY SERIES

  cursor.execute("""select ReligionsID from books where
      BooksDataID=%s""", (tmp_ids[0],))
  rel_id = cursor.fetchone()[0]  # The first entry will always give the correct value
  cursor.execute("""select d.ID, d.Name, b.SupersetID from
      books_data as d join books as b on d.ID=b.BooksDataID2
      where b.BooksDataID!=b.BooksDataID2 and b.ReligionsID=%s""",
      (rel_id,))
  tmp = cursor.fetchall()
  cursor.execute("""select d.ID from books_data as d join books as b
      on d.ID=b.BooksDataID2 join books_compilations as c
      on d.ID=c.BooksDataID where b.ReligionsID=%s""", (rel_id,))

I don't know that they're necessary, but here are the table definitions:

mysql> describe books;
+-----------------+------------+------+-----+---------+----------------+
| Field           | Type       | Null | Key | Default | Extra          |
+-----------------+------------+------+-----+---------+----------------+
| ID              | int(11)    | NO   | PRI | NULL    | auto_increment |
| ReligionsID     | int(11)    | NO   | MUL | NULL    |                |
| PrimaryReligion | tinyint(1) | YES  |     | 0       |                |
| BooksDataID     | int(11)    | NO   |     | NULL    |                |
| BooksDataID2    | int(11)    | YES  |     | NULL    |                |
| SupersetID      | int(11)    | YES  |     | NULL    |                |
+-----------------+------------+------+-----+---------+----------------+
6 rows in set (0.09 sec)

mysql> describe books_data;
+------------+----------------------------------------------------------------------------------------------------------------------+------+-----+---------+----------------+
| Field      | Type                                                                                                                 | Null | Key | Default | Extra          |
+------------+----------------------------------------------------------------------------------------------------------------------+------+-----+---------+----------------+
| ID         | int(11)                                                                                                              | NO   | PRI | NULL    | auto_increment |
| Name       | varchar(30)                                                                                                          | NO   |     | NULL    |                |
| Label      | enum('Cannonised','Uncannonised, fully accepted','Uncannonised,\n not fully accepted','Uncannonised, controversial') | NO   |     | NULL    |                |
| PrimaryKey | tinyint(1)                                                                                                           | YES  |     | 0       |                |
+------------+----------------------------------------------------------------------------------------------------------------------+------+-----+---------+----------------+
4 rows in set (0.13 sec)

mysql> describe books_compilations;
+-------------+-------------+------+-----+---------+----------------+
| Field       | Type        | Null | Key | Default | Extra          |
+-------------+-------------+------+-----+---------+----------------+
| ID          | int(11)     | NO   | PRI | NULL    | auto_increment |
| Name        | varchar(30) | NO   |     | NULL    |                |
| SupersetID  | int(11)     | NO   |     | NULL    |                |
| BooksDataID | int(11)     | NO   |     | NULL    |                |
+-------------+-------------+------+-----+---------+----------------+
4 rows in set (0.20 sec)


TIA,
Savi




Problem with INSERT INTO and UPDATE queries

2014-06-25 Thread Antonio Fernández Pérez
Hi list,

I have some problems with INSERT INTO and UPDATE queries on a big table.
Let me put the code and explain it ...

I have copied the create code of the table. This table has more than
15000000 rows.

Create Table: CREATE TABLE `radacct` (
  `RadAcctId` bigint(21) NOT NULL AUTO_INCREMENT,
  `AcctSessionId` varchar(32) NOT NULL DEFAULT '',
  `AcctUniqueId` varchar(32) NOT NULL DEFAULT '',
  `UserName` varchar(64) NOT NULL DEFAULT '',
  `Realm` varchar(64) DEFAULT '',
  `NASIPAddress` varchar(15) NOT NULL DEFAULT '',
  `NASPortId` varchar(15) DEFAULT NULL,
  `NASPortType` varchar(32) DEFAULT NULL,
  `AcctStartTime` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `AcctStopTime` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `AcctSessionTime` int(12) DEFAULT NULL,
  `AcctAuthentic` varchar(32) DEFAULT NULL,
  `ConnectInfo_start` varchar(50) DEFAULT NULL,
  `ConnectInfo_stop` varchar(50) DEFAULT NULL,
  `AcctInputOctets` bigint(20) DEFAULT NULL,
  `AcctOutputOctets` bigint(20) DEFAULT NULL,
  `CalledStationId` varchar(50) NOT NULL DEFAULT '',
  `CallingStationId` varchar(50) NOT NULL DEFAULT '',
  `AcctTerminateCause` varchar(32) NOT NULL DEFAULT '',
  `ServiceType` varchar(32) DEFAULT NULL,
  `FramedProtocol` varchar(32) DEFAULT NULL,
  `FramedIPAddress` varchar(15) NOT NULL DEFAULT '',
  `AcctStartDelay` int(12) DEFAULT NULL,
  `AcctStopDelay` int(12) DEFAULT NULL,
  `XAscendSessionSvrKey` varchar(10) DEFAULT NULL,
  PRIMARY KEY (`RadAcctId`),
  KEY `user_start` (`UserName`,`AcctStartTime`),
  KEY `nasip_starttime` (`NASIPAddress`,`AcctStartTime`),
  KEY `stop_nasip_start` (`AcctStopTime`,`NASIPAddress`,`AcctStartTime`),
  KEY `acctsesid_user_nasip` (`UserName`,`AcctSessionId`,`NASIPAddress`),
  KEY `user_stop` (`UserName`,`AcctStopTime`)
) ENGINE=InnoDB AUTO_INCREMENT=17694651 DEFAULT CHARSET=utf8

###

The next text shows the entries in mysql-slow.log.

###

# Time: 140625  9:37:45
# User@Host: radius[radius] @  [192.168.0.30]
# Thread_id: 94892163  Schema: radius  Last_errno: 0  Killed: 0
# Query_time: 2.327159  Lock_time: 0.000086  Rows_sent: 0  Rows_examined: 0  Rows_affected: 1  Rows_read: 0
# Bytes_sent: 19
use radius;
SET timestamp=1403681865;
INSERT INTO radacct (acctsessionid, acctuniqueid, username, realm,
  nasipaddress, nasportid, nasporttype, acctstarttime, acctstoptime,
  acctsessiontime, acctauthentic, connectinfo_start, connectinfo_stop,
  acctinputoctets, acctoutputoctets, calledstationid, callingstationid,
  acctterminatecause, servicetype, framedprotocol, framedipaddress,
  acctstartdelay, acctstopdelay, xascendsessionsvrkey)
VALUES ('80004ef0', '78d3fc2661258da5', 'zu629LAYUT', '', '178.136.71.251',
  '2147503856', 'Wireless-802.11', '2014-06-25 09:37:26',
  '0000-00-00 00:00:00', '0', '', '', '', '0', '0', 'tururu',
  '00-00-11-11-11-11', '', '', '', '178.136.71.1', '0', '0', '');
# User@Host: radius[radius] @  [192.168.0.31]
# Thread_id: 97905294  Schema: radius  Last_errno: 0  Killed: 0
# Query_time: 2.397604  Lock_time: 0.000062  Rows_sent: 0  Rows_examined: 1  Rows_affected: 1  Rows_read: 1
# Bytes_sent: 52
SET timestamp=1403681865;
UPDATE radacct
   SET framedipaddress  = '182.138.214.240',
       acctsessiontime  = '4199',
       acctinputoctets  = '0' << 32 | '12327909',
       acctoutputoctets = '0' << 32 | '294177486'
 WHERE acctsessionid = '805063b1'
   AND username = 'fa239DADUX'
   AND nasipaddress = '182.138.214.50';

###

I converted the previous query to a SELECT because I want to use EXPLAIN ...

###

SELECT framedipaddress  = '172.21.13.152',
       acctsessiontime  = '4199',
       acctinputoctets  = '0' << 32 | '12327909',
       acctoutputoctets = '0' << 32 | '294177486'
FROM radacct
WHERE acctsessionid = '805063b1'
  AND username = 'fa239DADUX'
  AND nasipaddress = '192.168.254.10';

+----+-------------+---------+------+---------------+------+---------+------+------+-------+
| id | select_type | table   | type | possible_keys | key  | key_len | ref  | rows | Extra |
+----+-------------+---------+------+---------------+------+---------+------+------+-------+
|  1 | SIMPLE      | radacct
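
Since MySQL 5.6.3, EXPLAIN also accepts UPDATE statements directly, so the
rewrite to a SELECT is no longer strictly necessary; a minimal sketch against
the same WHERE clause:

    EXPLAIN UPDATE radacct
       SET acctsessiontime = '4199'
     WHERE acctsessionid = '805063b1'
       AND username = 'fa239DADUX'
       AND nasipaddress = '182.138.214.50';

The WHERE clause matches the leading columns of the acctsesid_user_nasip
index (UserName, AcctSessionId, NASIPAddress); if the key column shows NULL
instead of that index, full scans would go a long way toward explaining the
multi-second statement times.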

Re: SHOW FULL COLUMNS QUERIES hogging my CPU

2014-06-03 Thread Johan De Meersman

- Original Message -
 From: Johan De Meersman vegiv...@tuxera.be
 Subject: Re: SHOW FULL COLUMNS QUERIES hogging my CPU
 
 In any case, this is nothing that can be fixed on the database level.

I may or may not have to swallow that :-p

I've been hammering a munin plugin that graphs schema sizes (yay capacity 
management); and obviously it's dragged down by an information_schema query. I 
stumbled upon this, which may or may not be helpful for your situation, too: 
http://www.mysqlperformanceblog.com/2011/12/23/solving-information_schema-slowness/

No guarantees, but it doesn't seem to have any significant downsides.


-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




Re: SHOW FULL COLUMNS QUERIES hogging my CPU

2014-06-03 Thread Jatin Davey

On 6/3/2014 4:47 PM, Johan De Meersman wrote:

- Original Message -

From: Johan De Meersman vegiv...@tuxera.be
Subject: Re: SHOW FULL COLUMNS QUERIES hogging my CPU

In any case, this is nothing that can be fixed on the database level.

I may or may not have to swallow that :-p

I've been hammering a munin plugin that graphs schema sizes (yay capacity 
management); and obviously it's dragged down by an information_schema query. I 
stumbled upon this, which may or may not be helpful for your situation, too: 
http://www.mysqlperformanceblog.com/2011/12/23/solving-information_schema-slowness/

No guarantees, but it doesn't seem to have any significant downsides.


I already have this variable turned off in my DB. It does not stop the 
SHOW FULL COLUMN queries from hitting the DB.


| innodb_stats_on_metadata   | OFF

Thanks
Jatin




Re: SHOW FULL COLUMNS QUERIES hogging my CPU

2014-06-03 Thread Jatin Davey

On 6/2/2014 9:10 PM, Morgan Tocker wrote:

Hi Jatin,

On Jun 2, 2014, at 7:56 AM, Jatin Davey jasho...@cisco.com wrote:


I found this blog while trying to find a fix for my problem. Thought it might 
be useful to share.

Here it is:

http://whirlpool.net.au/blog/5

To confirm - did you manage to fix your problem?

I was about to comment that it looks like queries generated by an ORM or
connector.  It looks from your version string like you have MySQL
Enterprise; may I suggest creating a ticket with support?

Regarding your most recent reply:


 All the SHOW FULL COLUMNS queries that we do are against very small
 tables. They hardly cross 50 rows. That is why, whenever these queries are
 made, I see high CPU usage in %user_time. If they were very large tables then the
 CPU would spend a lot of time waiting for I/O, as the database reads would have to
 do a lot of I/O to read the data from the disks.

If it helps - I believe performance should be similar with large tables, since 
in the case of big table or small table, what is accessed here should just be 
meta data.

Earlier versions of MySQL could rebuild InnoDB statistics on some SHOW commands 
(I’m not sure about SHOW FULL COLUMN), but this is now disabled by default:
http://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_stats_on_metadata

- Morgan

Hi Morgan

I think you may be correct. It could be an issue with the connector. I
saw the code of the MySQL Connector/J and found the getColumns() method
building a StringBuffer object which eventually runs these SHOW FULL
COLUMNS queries. I have to look at my code wherever this method is
called and make sure we don't do it. But I have not nailed this down. It
is still just an assumption that I have.


Another point is that I have innodb_stats_on_metadata turned OFF

Thanks for the response.

Thanks
Jatin




SHOW FULL COLUMNS QUERIES hogging my CPU

2014-06-02 Thread Jatin Davey

Hi All

I am no expert with mysql and databases. Hence seeking out some help on 
this forum.


Basically I got a query dump of my application during its operation. I 
had collected the queries for about 4 hours. I ran some scripts on the 
number of queries being sent to the databases.


The query file was a whopping 4 GB in size. Upon analyzing the queries I 
found that there were a total of 30 million queries made to the database, 
of which 10 million were SHOW FULL COLUMNS queries.


The SHOW FULL COLUMNS queries were of the format below:

SHOW FULL COLUMNS FROM `APIC_PHY_FABRIC_PHYSICAL_INTERFACE` FROM 
`db_private_admin` LIKE '%'


This is causing considerable CPU usage in %user_time on my system. As a 
result it is killing my system and I am unable to scale to more load. I 
looked for any code in my app that issues these queries and found 
nothing. Could someone let me know how I can get rid of these queries 
being sent to the DB?


I am using mysql server version: 5.6.10-enterprise-commercial-advanced

Please let me know if any more information is required.

Appreciate your help in this regard.

Thanks
Jatin


Re: SHOW FULL COLUMNS QUERIES hogging my CPU

2014-06-02 Thread Reindl Harald


Am 02.06.2014 15:35, schrieb Jatin Davey:
 I am no expert with mysql and databases. Hence seeking out some help on this 
 forum.
 
 Basically I got a query dump of my application during its operation. I had 
 collected the queries for about 4 hours.
 I ran some scripts on the number of queries being sent to the databases.
 
 The query file was a whopping 4 GB in size. Upon analyzing the queries I 
 found that there were a total of 30
 million queries made to the database, of which 10 million were 
 SHOW FULL COLUMNS queries.
 
 The SHOW FULL COLUMN queries were of the format as below:
 
 SHOW FULL COLUMNS FROM `APIC_PHY_FABRIC_PHYSICAL_INTERFACE` FROM 
 `db_private_admin` LIKE '%'
 
 This is causing considerable cpu usage in %user_time in my system

fix your application - there is no single reason to run such
queries 10 million times because the result won't change all
the time

and avoid LIKE in general





Re: SHOW FULL COLUMNS QUERIES hogging my CPU

2014-06-02 Thread Jatin Davey

On 6/2/2014 7:18 PM, Reindl Harald wrote:


Am 02.06.2014 15:35, schrieb Jatin Davey:

I am no expert with mysql and databases. Hence seeking out some help on this 
forum.

Basically I got a query dump of my application during its operation. I had 
collected the queries for about 4 hours.
I ran some scripts on the number of queries being sent to the databases.

The query file was a whopping 4 GB in size. Upon analyzing the queries I found 
that there were a total of 30
million queries made to the database, of which 10 million were 
SHOW FULL COLUMNS queries.

The SHOW FULL COLUMN queries were of the format as below:

SHOW FULL COLUMNS FROM `APIC_PHY_FABRIC_PHYSICAL_INTERFACE` FROM 
`db_private_admin` LIKE '%'

This is causing considerable cpu usage in %user_time in my system

fix your application - there is no single reason to run such
queries 10 million times because the result won't change all
the time

and avoid like in general


Our application does not send such queries to the DB. I have searched 
through my entire code and we don't run such queries. It has something to 
do with a layer below our application. But I am not sure where it is.


Thanks
Jatin





Re: SHOW FULL COLUMNS QUERIES hogging my CPU

2014-06-02 Thread Michael Dykman
The advice to 'avoid LIKE in general' is a little strong.  LIKE is
very useful and does not always cause inefficient queries, although
the possibility is there.

However, there is one form which must be avoided at all costs: the one
where the glob-text matcher is the first character in the string.
LIKE '%' or LIKE '%foobar' are equally costly: no matter what index
there might be on that column, the query will have to visit every
single row to test the match, inducing a full table scan.  Putting the
wildcard early in the expression is also dangerous, but how dangerous
depends on how much data you have.  LIKE 'a%' avoids visiting every
row, but it still has to test against a significant subset of all
rows: if you have 100 million rows, this will still cause your query
to visit a very large number of them.

So, I would have to ask: how many records are in that table? How many
columns?  Is it a table or a view?
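
For instance, on a hypothetical table t with an index on name (the names
here are made up purely for illustration), the difference shows up directly
in EXPLAIN:

    CREATE TABLE t (
      id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
      name VARCHAR(50),
      KEY (name)
    );
    EXPLAIN SELECT * FROM t WHERE name LIKE 'abc%';  -- type: range, uses KEY(name)
    EXPLAIN SELECT * FROM t WHERE name LIKE '%abc';  -- type: ALL, every row tested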

On Mon, Jun 2, 2014 at 10:21 AM, Jatin Davey jasho...@cisco.com wrote:
 On 6/2/2014 7:18 PM, Reindl Harald wrote:


 Am 02.06.2014 15:35, schrieb Jatin Davey:

 I am no expert with mysql and databases. Hence seeking out some help on
 this forum.

 Basically I got a query dump of my application during its operation. I
 had collected the queries for about 4 hours.
 I ran some scripts on the number of queries being sent to the databases.

 The query file was a whopping 4 GB in size. Upon analyzing the queries I
 found that there were a total of 30
 million queries made to the database, of which 10 million were
 SHOW FULL COLUMNS queries.

 The SHOW FULL COLUMN queries were of the format as below:

 SHOW FULL COLUMNS FROM `APIC_PHY_FABRIC_PHYSICAL_INTERFACE` FROM
 `db_private_admin` LIKE '%'

 This is causing considerable cpu usage in %user_time in my system

 fix your application - there is no single reason to run such
 queries 10 million times because the result won't change all
 the time

 and avoid like in general


 Our application does not send such queries to the DB. I have searched
 through my entire code and we don't run such queries. It has something to do
 with a layer below our application. But I am not sure where it is.

 Thanks
 Jatin






-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.




Re: SHOW FULL COLUMNS QUERIES hogging my CPU

2014-06-02 Thread Singer Wang
What's your technology stack?


On Mon, Jun 2, 2014 at 10:21 AM, Jatin Davey jasho...@cisco.com wrote:

 On 6/2/2014 7:18 PM, Reindl Harald wrote:


 Am 02.06.2014 15:35, schrieb Jatin Davey:

 I am no expert with mysql and databases. Hence seeking out some help on
 this forum.

 Basically I got a query dump of my application during its operation. I
 had collected the queries for about 4 hours.
 I ran some scripts on the number of queries being sent to the databases.

 The query file was a whopping 4 GB in size. Upon analyzing the queries I
 found that there were a total of 30
 million queries made to the database, of which 10 million
 were SHOW FULL COLUMNS queries.

 The SHOW FULL COLUMN queries were of the format as below:

 SHOW FULL COLUMNS FROM `APIC_PHY_FABRIC_PHYSICAL_INTERFACE` FROM
 `db_private_admin` LIKE '%'

 This is causing considerable cpu usage in %user_time in my system

 fix your application - there is no single reason to run such
 queries 10 million times because the result won't change all
 the time

 and avoid like in general


 Our application does not send such queries to the DB. I have searched
 through my entire code and we don't run such queries. It has something to do
 with a layer below our application. But I am not sure where it is.

 Thanks
 Jatin






Re: SHOW FULL COLUMNS QUERIES hogging my CPU

2014-06-02 Thread Jatin Davey
Certain parts of our code use DataNucleus while other parts of the code 
use JDBC to access the DB.


I would say that 70% of our code uses DataNucleus while the remaining 
30% of our code uses JDBC for database-related operations.


Thanks
Jatin

On 6/2/2014 8:17 PM, Singer Wang wrote:

What's your technology stack?


On Mon, Jun 2, 2014 at 10:21 AM, Jatin Davey jasho...@cisco.com wrote:


On 6/2/2014 7:18 PM, Reindl Harald wrote:


Am 02.06.2014 15:35, schrieb Jatin Davey:

I am no expert with mysql and databases. Hence seeking out
some help on this forum.

Basically I got a query dump of my application during its
operation. I had collected the queries for about 4 hours.
I ran some scripts on the number of queries being sent to
the databases.

The query file was a whopping 4 GB in size. Upon analyzing
the queries I found that there were a total of 30
million queries made to the database, of which 10
million were SHOW FULL COLUMNS queries.

The SHOW FULL COLUMN queries were of the format as below:

SHOW FULL COLUMNS FROM
`APIC_PHY_FABRIC_PHYSICAL_INTERFACE` FROM
`db_private_admin` LIKE '%'

This is causing considerable cpu usage in %user_time in my
system

fix your application - there is no single reason to run such
queries 10 million times because the result won't change all
the time

and avoid like in general


Our application does not send such queries to the DB. I have
searched through my entire code and we don't run such queries. It
has something to do with a layer below our application. But I am
not sure where it is.

Thanks
Jatin








Re: SHOW FULL COLUMNS QUERIES hogging my CPU

2014-06-02 Thread Jatin Davey
All the SHOW FULL COLUMNS queries that we do are against very small 
tables. They hardly cross 50 rows. That is why, whenever these queries 
are made, I see high CPU usage in %user_time. If they were very large 
tables then the CPU would spend a lot of time waiting for I/O, as the 
database reads would have to do a lot of I/O to read the data from the 
disks.


Thanks
Jatin

On 6/2/2014 8:13 PM, Michael Dykman wrote:

The advice to 'avoid LIKE in general' is a little strong.  LIKE is
very useful and does not always cause inefficient queries, although
the possibility is there.

However, there is one form which must be avoided at all costs: the one
where the glob-text matcher is the first character in the string.
LIKE '%' or LIKE '%foobar' are equally costly: no matter what index
there might be on that column, the query will have to visit every
single row to test the match, inducing a full table scan.  Putting the
wildcard early in the expression is also dangerous, but how dangerous
depends on how much data you have.  LIKE 'a%' avoids visiting every
row, but it still has to test against a significant subset of all
rows: if you have 100 million rows, this will still cause your query
to visit a very large number of them.

So, I would have to ask: how many records are in that table? How many
columns?  Is it a table or a view?

On Mon, Jun 2, 2014 at 10:21 AM, Jatin Davey jasho...@cisco.com wrote:

On 6/2/2014 7:18 PM, Reindl Harald wrote:


Am 02.06.2014 15:35, schrieb Jatin Davey:

I am no expert with mysql and databases. Hence seeking out some help on
this forum.

Basically I got a query dump of my application during its operation. I
had collected the queries for about 4 hours.
I ran some scripts on the number of queries being sent to the databases.

The query file was a whopping 4 GB in size. Upon analyzing the queries I
found that there were a total of 30
million queries made to the database, of which 10 million were
SHOW FULL COLUMNS queries.

The SHOW FULL COLUMN queries were of the format as below:

SHOW FULL COLUMNS FROM `APIC_PHY_FABRIC_PHYSICAL_INTERFACE` FROM
`db_private_admin` LIKE '%'

This is causing considerable cpu usage in %user_time in my system

fix your application - there is no single reason to run such
queries 10 million times because the result won't change all
the time

and avoid like in general


Our application does not send such queries to the DB. I have searched
through my entire code and we don't run such queries. It has something to do
with a layer below our application. But I am not sure where it is.

Thanks
Jatin










Re: SHOW FULL COLUMNS QUERIES hogging my CPU

2014-06-02 Thread Jatin Davey
I found this blog while trying to find a fix for my problem. Thought it 
might be useful to share.


Here it is:

http://whirlpool.net.au/blog/5

Thanks
Jatin

On 6/2/2014 8:22 PM, Jatin Davey wrote:
Certain parts of our code use DataNucleus while other parts of the 
code use JDBC to access the DB.


I would say that 70% of our code uses DataNucleus while the remaining 
30% of our code uses JDBC for database-related operations.


Thanks
Jatin

On 6/2/2014 8:17 PM, Singer Wang wrote:

What's your technology stack?


On Mon, Jun 2, 2014 at 10:21 AM, Jatin Davey jasho...@cisco.com wrote:


On 6/2/2014 7:18 PM, Reindl Harald wrote:


Am 02.06.2014 15:35, schrieb Jatin Davey:

I am no expert with mysql and databases. Hence seeking out
some help on this forum.

Basically I got a query dump of my application during its
operation. I had collected the queries for about 4 hours.
I ran some scripts on the number of queries being sent to
the databases.

The query file was a whopping 4 GB in size. Upon analyzing
the queries I found that there were a total of 30
million queries made to the database, of which 10
million were SHOW FULL COLUMNS queries.

The SHOW FULL COLUMN queries were of the format as below:

SHOW FULL COLUMNS FROM
`APIC_PHY_FABRIC_PHYSICAL_INTERFACE` FROM
`db_private_admin` LIKE '%'

This is causing considerable cpu usage in %user_time in my
system

fix your application - there is no single reason to run such
queries 10 million times because the result won't change all
the time

and avoid like in general


Our application does not send such queries to the DB. I have
searched through my entire code and we don't run such queries. It
has something to do with a layer below our application. But I am
not sure where it is.

Thanks
Jatin











Re: SHOW FULL COLUMNS QUERIES hogging my CPU

2014-06-02 Thread Morgan Tocker
Hi Jatin,

On Jun 2, 2014, at 7:56 AM, Jatin Davey jasho...@cisco.com wrote:

 I found this blog while trying to find a fix for my problem. Thought it might 
 be useful to share.
 
 Here it is:
 
 http://whirlpool.net.au/blog/5

To confirm - did you manage to fix your problem?

I was about to comment that it looks like queries generated by an ORM or 
connector.  It looks from your version string like you have MySQL 
Enterprise; may I suggest creating a ticket with support?

Regarding your most recent reply:

 All the SHOW FULL COLUMNS queries that we do are against very small 
 tables. They hardly cross 50 rows. That is why, whenever these queries are 
 made, I see high CPU usage in %user_time. If they were very large tables then 
 the CPU would spend a lot of time waiting for I/O, as the database reads would 
 have to do a lot of I/O to read the data from the disks.

If it helps - I believe performance should be similar with large tables, since 
in the case of big table or small table, what is accessed here should just be 
meta data.

Earlier versions of MySQL could rebuild InnoDB statistics on some SHOW commands 
(I’m not sure about SHOW FULL COLUMN), but this is now disabled by default:
http://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_stats_on_metadata
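
A quick way to confirm the current setting on a running server (these are
standard statements, nothing version-specific):

    SHOW GLOBAL VARIABLES LIKE 'innodb_stats_on_metadata';
    SET GLOBAL innodb_stats_on_metadata = OFF;  -- if it is still ON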

- Morgan



Re: SHOW FULL COLUMNS QUERIES hogging my CPU

2014-06-02 Thread Johan De Meersman
- Original Message -
 From: Jatin Davey jasho...@cisco.com
 Subject: Re: SHOW FULL COLUMNS QUERIES hogging my CPU
 
 Certain parts of our code use DataNucleus while other parts of the code

A data persistence product... there's your problem.

Persisting objects into a relational database is like fitting a square peg in a 
round hole: it works if you have a big enough hammer, but something is going to 
break.

I'm not going to go into the myriad ways such layers add inefficiencies that 
only get worse with scale; but I suggest you log a ticket with your vendor; 
they're bound to have encountered the problem before. There's probably some 
setting that caches the db metadata instead of requesting it over and over 
again.

In any case, this is nothing that can be fixed on the database level.


-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




RE: Are There Slow Queries that Don't Show in the Slow Query Logs?

2013-05-31 Thread Robinson, Eric
 -Original Message-
 From: Vikas Shukla [mailto:myfriendvi...@gmail.com] 
 Sent: Thursday, May 30, 2013 7:19 PM
 To: Robinson, Eric; mysql@lists.mysql.com
 Subject: RE: Are There Slow Queries that Don't Show in the 
 Slow Query Logs?
 
 Hi,
 
 No, it does not represent the time from request to response, 
 nor does it include the time that is spent waiting for 
 the locks to be released.
 The slow query log consists of SQL statements that took more 
 than long_query_time seconds to EXECUTE. The default value of 
 long_query_time is 10.
 The time to acquire the initial locks is not counted as 
 execution time.
 mysqld writes a statement to the slow query log after it has 
 been executed and after all locks have been released, so log 
 order might differ from execution order.
 
 Let's take an example: if a query is received at 10:00 hrs and 
 waits till 10:05 hrs, it starts executing at 
 10:05:00 and completes at 10:05:24 (HH:MM:SS). So here it 
 took 24 seconds to execute, and only
 those 24 seconds are counted.
 So if long_query_time is equal to 10, which is by default, 
 this would be logged in slow query log as it takes more than 
 10 seconds to execute.
 
 Sent from my Windows Phone From: Robinson, Eric
 Sent: 31-05-2013 03:48
 To: mysql@lists.mysql.com
 Subject: Are There Slow Queries that Don't Show in the Slow 
 Query Logs?
 As everyone knows, with MyISAM, queries and inserts can lock 
 tables and force other queries to wait in a queue. When that 
 happens, does the time shown in the slow query logs represent 
 the whole time from when the server received the request to 
 when the response was sent to the client? Or is the time a 
 query spends waiting for a table lock to be released omitted 
 from what is recorded in the slow query logs?
 
 --
 Eric Robinson
 
 

Very good answer, Vikas. Thank you for the clarification!

--Eric








RE: Are There Slow Queries that Don't Show in the Slow Query Logs?

2013-05-30 Thread Vikas Shukla
Hi,

No, it does not represent the time from request to response, nor does
it include the time that is spent waiting for the locks to be
released.
The slow query log consists of SQL statements that took more than
long_query_time seconds to EXECUTE. The default value of
long_query_time is 10.
The time to acquire the initial locks is not counted as execution time.
mysqld writes a statement to the slow query log after it has been
executed and after all locks have been released, so log order might
differ from execution order.

Let's take an example: if a query is received at 10:00 hrs and waits
till 10:05 hrs, it starts executing at 10:05:00 and completes
at 10:05:24 (HH:MM:SS). So here it took 24 seconds to execute, and only
those 24 seconds are counted.
So if long_query_time is equal to 10, which is by default, this would
be logged in slow query log as it takes more than 10 seconds to execute.
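
The threshold itself is just a server variable, so it can be checked and
lowered if shorter statements need to be caught as well (a minimal sketch):

    SHOW GLOBAL VARIABLES LIKE 'long_query_time';
    SET GLOBAL long_query_time = 2;  -- log statements executing longer than 2s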

Sent from my Windows Phone From: Robinson, Eric
Sent: 31-05-2013 03:48
To: mysql@lists.mysql.com
Subject: Are There Slow Queries that Don't Show in the Slow Query Logs?
As everyone knows, with MyISAM, queries and inserts can lock tables
and force other queries to wait in a queue. When that happens, does
the time shown in the slow query logs represent the whole time from
when the server received the request to when the response was sent to
the client? Or is the time a query spends waiting for a table lock to
be released omitted from what is recorded in the slow query logs?

--
Eric Robinson










RE: Determining number of queries

2013-04-06 Thread Rick James
Richard, there is more to a system than number of queries.
Please post these in a new thread on http://forums.mysql.com/list.php?24 :
   SHOW GLOBAL STATUS;
   SHOW VARIABLES;
   RAM size
I will do some analysis and provide my opinion.

 -Original Message-
 From: Manuel Arostegui [mailto:man...@tuenti.com]
 Sent: Thursday, April 04, 2013 3:20 PM
 To: h...@tbbs.net
 Cc: mysql@lists.mysql.com
 Subject: Re: Determing number of queries
 
 2013/4/4 h...@tbbs.net
 
   2013/04/04 22:40 +0200, Manuel Arostegui 
  You can start with show innodb status;  It is now show engine
  innodb status
 
 
 Yep, sorry, not used to it just yet :-)
 
 
 --
 Manuel Aróstegui
 Systems Team
 tuenti.com




Determining number of queries

2013-04-04 Thread Richard Reina
I am looking to spec out hardware for a new database server. I figured
a good starting point would be to find out how much usage my current
server is getting. It's just a local machine that runs mysql and is
queried by a few users here in the office. Is there a way that mysql
can tell me info about its workload?

Thanks

Richard




Re: Determining number of queries

2013-04-04 Thread Manuel Arostegui
2013/4/4 Richard Reina gatorre...@gmail.com

 I am looking to spec out hardware for a new database server. I figured
 a good starting point would be to find out how much usage my current
 server is getting. It just a local machine that runs mysql and is
 queried by a few users here in the office. Is there a way that mysql
 can tell me info about it's workload?

 Thanks


Hello,

You can start with show innodb status;
For the basics, you might want to start graphing the stats under ROW
OPERATIONS (inserts, updates, reads, deletes...)
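
For a rough overall number, the server's own counters can be read directly;
dividing by uptime gives queries per second (these are standard status
variables):

    SHOW GLOBAL STATUS LIKE 'Questions';   -- total statements received
    SHOW GLOBAL STATUS LIKE 'Uptime';      -- seconds since startup
    SHOW GLOBAL STATUS LIKE 'Com_select';  -- per-command counters
    SHOW GLOBAL STATUS LIKE 'Com_insert';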

Manuel.


Re: Determining number of queries

2013-04-04 Thread hsv
 2013/04/04 22:40 +0200, Manuel Arostegui 
You can start with show innodb status;

It is now
show engine innodb status





Re: Determining number of queries

2013-04-04 Thread Manuel Arostegui
2013/4/4 h...@tbbs.net

  2013/04/04 22:40 +0200, Manuel Arostegui 
 You can start with show innodb status;
 
 It is now
 show engine innodb status


Yep, sorry, not used to it just yet :-)


-- 
Manuel Aróstegui
Systems Team
tuenti.com


Re: Slow queries / inserts InnoDB

2012-10-09 Thread Andrés Tello
You are forcing mysql to do full table scans with the substr...

Use explain to see that you aren't using any index.

Avoid the use of substr in the where clause by splitting your data out,
indexing that field, and doing your query over that field.



That is why your query is so slow.

The slow insert is due to your OS...


On Mon, Oct 8, 2012 at 2:59 AM, Adrián Espinosa Moreno 
aespinosamor...@gmail.com wrote:

 Hi.



 I have developed my first application with VB.NET at work (junior
 developer) and basically it reads several thousands of lines from X number
 of files, and inserts them into a mysql database.



 The application consists of several steps:

 1)  Dump all lines with all fields into a temp table.

 a.   This works fine. Optimal speed for the hardware we have.

 2)  Query temp table to obtain certain records. I query by a unique ID. I
 insert all unique IDs (isn field). If the ID matches my interests, I insert
 it into an ArrayList.

 a.   Query: SELECT COUNT(SUBSTR(ISN,2)), SUBSTR(isn, 2) FROM
 SMS.TEMPDATADUMP WHERE error = 0 GROUP BY SUBSTR(ISN,2) HAVING
 COUNT(SUBSTR(ISN,2)) = 4

 b.  The isn is not unique per line, but per data (sms)

 c.   Once I have all isn on an arraylist, I do the following query:

 i.  SELECT select, my, fields, of, interest FROM SMS.TEMPDATADUMP
     WHERE SUBSTR(ISN, 2) = '" & isn & "' ORDER BY type LIMIT 1

 d.  To obtain some data. During the process I query the other table
 around 10 times per ISN.

 e.  Here is the problem. If I have a few files to process (around
 3000-4000 lines in total, small array) these steps work fine, good speed.
 But if I have big files or a lot of files (more than 10000 lines in total,
 big array), these steps are incredibly slow. Queries and inserts are too
 slow. Meaning, one or two inserts per second, while in the other case inserts
 are around 800 per second.



 Our hardware is not optimized for a database server, but I don't have another
 choice. It is mostly a desktop computer:

 Intel core i5, windows 32 bits, 3GB RAM, one disk 500GB 7200rpm.



 I have tried some optimizations commented in mysqlperformance blog without
 success.

 Any way to optimize this?



 Thank you very much in advance.





 Adrián Espinosa Moreno.



RE: Slow queries / inserts InnoDB

2012-10-09 Thread Rick James
When creating the temp table, add another column, isn2, that is SUBSTR(ISN, 2).
Add
  INDEX(isn2, type)
Change the query to...
  isn2 = '" & isn & "' ORDER BY type LIMIT 1

If that does not work, please provide clearer details, including
SHOW CREATE TABLE
SHOW TABLE STATUS
EXPLAIN SELECT (with substitutions filled in)
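
A minimal sketch of that change (the column size is a guess from the queries
shown; ideally isn2 is populated at load time rather than by a second pass):

    ALTER TABLE SMS.TEMPDATADUMP
      ADD COLUMN isn2 VARCHAR(32),
      ADD INDEX isn2_type (isn2, type);
    UPDATE SMS.TEMPDATADUMP SET isn2 = SUBSTR(ISN, 2);
    -- the per-ISN lookup then becomes an index lookup:
    -- SELECT ... FROM SMS.TEMPDATADUMP WHERE isn2 = '...' ORDER BY type LIMIT 1;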

 -Original Message-
 From: Andrés Tello [mailto:mr.crip...@gmail.com]
 Sent: Tuesday, October 09, 2012 7:04 AM
 To: Adrián Espinosa Moreno
 Cc: mysql@lists.mysql.com
 Subject: Re: Slow queries / inserts InnoDB
 
 You are forcing mysql to do full table scans with the substr...
 
 Use explain to see that you aren't using any index.
 
 Avoid the use of substr in the where clause by splitting your data out,
 indexing that field, and doing your query over that field.
 
 
 
 That is why your query is so slow.
 
 The slow insert is due to your OS...
 
 
 On Mon, Oct 8, 2012 at 2:59 AM, Adrián Espinosa Moreno 
 aespinosamor...@gmail.com wrote:
 
  Hi.
 
 
 
  I have developed my first application with VB.NET at work (junior
  developer) and basically it reads several thousands of lines from X
  number of files, and inserts them into a mysql database.
 
 
 
  The application consists of several steps:
 
  1)  Dump all lines with all fields into a temp table.
 
  a.   This works fine. Optimal speed for the hardware we have.
 
  2)  Query temp table to obtain certain records. I query by a unique ID. I
  insert all unique IDs (isn field). If the ID matches my interests, I
  insert it into an ArrayList.
 
  a.   Query: SELECT COUNT(SUBSTR(ISN,2)), SUBSTR(isn, 2) FROM
  SMS.TEMPDATADUMP WHERE error = 0 GROUP BY SUBSTR(ISN,2) HAVING
  COUNT(SUBSTR(ISN,2)) = 4
 
  b.  The isn is not unique per line, but per data (sms)
 
  c.   Once I have all isn on an arraylist, I do the following
 query:
 
 i.  SELECT select, my, fields, of, interest FROM SMS.TEMPDATADUMP
     WHERE SUBSTR(ISN, 2) = '" & isn & "' ORDER BY type LIMIT 1
 
  d.  To obtain some data. During the process I query the other table
  around 10 times per ISN.
 
  e.  Here is the problem. If I have a few files to process (around
  3000-4000 lines in total, small array) these steps work fine, good speed.
  But if I have big files or a lot of files (more than 10000 lines in
  total, big array), these steps are incredibly slow. Queries and inserts
  are too slow. Meaning, one or two inserts per second, while in the other
  case inserts are around 800 per second.
 
 
 
  Our hardware is not optimized for a database server, but I don't have
  another choice. It is mostly a desktop computer:
 
  Intel core i5, windows 32 bits, 3GB RAM, one disk 500GB 7200rpm.
 
 
 
  I have tried some optimizations commented in mysqlperformance blog
  without success.
 
  Any way to optimize this?
 
 
 
  Thank you very much in advance.
 
 
 
 
 
  Adrián Espinosa Moreno.
 




Re: Slow queries / inserts InnoDB

2012-10-09 Thread Cheng Xianming
Tello is right! Moreno, I suggest that if you want to query by 
SUBSTR(ISN,2), you should create a field named something like isnInfo, and 
save SUBSTR(ISN,2) into this field when you insert. Don't forget to create 
an index on this field.



On 2012-10-09 10:04 PM, Andrés Tello wrote:

You are forcing mysql to do full table scans with the substr...

Use explain to see that you aren't using any index.

Avoid the use of substr in the where clause by splitting your data out,
indexing that field, and doing your query over that field.



That is why your query is so slow.

The slow insert is due to your OS...


On Mon, Oct 8, 2012 at 2:59 AM, Adrián Espinosa Moreno 
aespinosamor...@gmail.com wrote:


Hi.



I have developed my first application with VB.NET at work (junior
developer) and basically it reads several thousands of lines from X number
of files, and inserts them into a mysql database.



The application consists of several steps:

1)  Dump all lines with all fields into a temp table.

a.   This works fine. Optimal speed for the hardware we have.

2)  Query temp table to obtain certain records. I query by a unique ID. I
insert all unique IDs (isn field). If the ID matches my interests, I insert
it into an ArrayList.

a.   Query: SELECT COUNT(SUBSTR(ISN,2)), SUBSTR(isn, 2) FROM
SMS.TEMPDATADUMP WHERE error = 0 GROUP BY SUBSTR(ISN,2) HAVING
COUNT(SUBSTR(ISN,2)) = 4

b.  The isn is not unique per line, but per data (sms)

c.   Once I have all isn on an arraylist, I do the following query:

i.  SELECT select, my, fields, of, interest FROM SMS.TEMPDATADUMP
    WHERE SUBSTR(ISN, 2) = '" & isn & "' ORDER BY type LIMIT 1

d.  To obtain some data. During the process I query the other table
around 10 times per ISN.

e.  Here is the problem. If I have a few files to process (around
3000-4000 lines in total, small array) these steps work fine, good speed.
But if I have big files or a lot of files (more than 10000 lines in total,
big array), these steps are incredibly slow. Queries and inserts are too
slow. Meaning, one or two inserts per second, while in the other case inserts
are around 800 per second.



Our hardware is not optimized for a database server, but I don't have another
choice. It is mostly a desktop computer:

Intel core i5, windows 32 bits, 3GB RAM, one disk 500GB 7200rpm.



I have tried some optimizations commented in mysqlperformance blog without
success.

Any way to optimize this?



Thank you very much in advance.





Adrián Espinosa Moreno.







Slow queries / inserts InnoDB

2012-10-08 Thread Adrián Espinosa Moreno
Hi.



I have developed my first application with VB.NET at work (junior
developer) and basically it reads several thousands of lines from X number
of files, and inserts them into a mysql database.



The application consists of several steps:

1)  Dump all lines with all fields into a temp table.

a.   This works fine. Optimal speed for the hardware we have.

2)  Query temp table to obtain certain records. I query by a unique ID. I
insert all unique IDs (isn field). If the ID matches my interests, I insert
it into an ArrayList.

a.   Query: SELECT COUNT(SUBSTR(ISN,2)), SUBSTR(isn, 2) FROM
SMS.TEMPDATADUMP WHERE error = 0 GROUP BY SUBSTR(ISN,2) HAVING
COUNT(SUBSTR(ISN,2)) = 4

b.  The isn is not unique per line, but per data (sms)

c.   Once I have all isn on an arraylist, I do the following query:

   i.  SELECT select, my, fields, of, interest FROM SMS.TEMPDATADUMP
       WHERE SUBSTR(ISN, 2) = '" & isn & "' ORDER BY type LIMIT 1

d.  To obtain some data. During the process I query the other table
around 10 times per ISN.

e.  Here is the problem. If I have a few files to process (around
3000-4000 lines in total, small array) these steps work fine, good speed.
But if I have big files or a lot of files (more than 10000 lines in total,
big array), these steps are incredibly slow. Queries and inserts are too
slow. Meaning, one or two inserts per second, while in the other case inserts
are around 800 per second.



Our hardware is not optimized for a database server, but I don't have another
choice. It is mostly a desktop computer:

Intel core i5, windows 32 bits, 3GB RAM, one disk 500GB 7200rpm.



I have tried some optimizations suggested on the mysqlperformance blog without
success.

Any way to optimize this?



Thank you very much in advance.





Adrián Espinosa Moreno.


Which Database when lot of insert / update queries to execute

2012-06-15 Thread abhishek jain
hi,
I am biased on mysql, and hence i am asking this on mysql forum first.
I am designing a solution which will need me to import from CSV, i am using
my JAVA code to parse. CSV file has 500K rows, and i need to do it thrice
an hour, for 10 hours  a day.
The Queries will mainly be update but select and insert also at times,
The database size will be estimated to be about 5GB.
I need to know is this a classic case for a NOSQL database or mysql is a
good option.

Also , if i need to do 'group by', on a column on a large table what should
i keep in mind, is it advisable,

Please advice,

-- 
Thanks and kind Regards,
Abhishek jain


Re: Which Database when lot of insert / update queries to execute

2012-06-15 Thread Tsubasa Tanaka
Hello,

 I am designing a solution which will need me to import from CSV; I am using
 my Java code to parse. The CSV file has 500K rows, and I need to do it three
 times an hour, for 10 hours a day.
Try using `LOAD DATA INFILE' to import from the CSV file.

http://dev.mysql.com/doc/refman/5.5/en/load-data.html
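
A minimal sketch, assuming a comma-separated file with a header row (the
path, table and column names are placeholders):

    LOAD DATA INFILE '/tmp/import.csv'
    INTO TABLE mytable
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES
    (col1, col2, col3);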


 Also, if I need to do a 'group by' on a column of a large table, what should
 I keep in mind? Is it advisable?

Create an index on the columns used by 'group by' and on the columns used by
aggregate functions.


regards,

2012/6/15 abhishek jain abhishek.netj...@gmail.com:
 Hi,
 I am biased toward mysql, and hence I am asking this on the mysql forum first.
 I am designing a solution which will need me to import from CSV; I am using
 my Java code to parse. The CSV file has 500K rows, and I need to do it three
 times an hour, for 10 hours a day.
 The queries will mainly be updates, but selects and inserts at times as well.
 The database size is estimated to be about 5GB.
 I need to know: is this a classic case for a NoSQL database, or is mysql a
 good option?

 Also, if I need to do a 'group by' on a column of a large table, what should
 I keep in mind? Is it advisable?

 Please advise,

 --
 Thanks and kind Regards,
 Abhishek jain




RE: Which Database when lot of insert / update queries to execute

2012-06-15 Thread Rick James
Let's see
SHOW CREATE TABLE ...
SELECT ...

It sounds doable with MySQL; might be too big for NOSQL.

 -Original Message-
 From: abhishek jain [mailto:abhishek.netj...@gmail.com]
 Sent: Friday, June 15, 2012 1:57 AM
 To: mysql@lists.mysql.com
 Subject: Which Database when lot of insert / update queries to execute
 
 Hi,
 I am biased toward mysql, and hence I am asking this on the mysql forum
 first.
 I am designing a solution which will need me to import from CSV; I am
 using my Java code to parse. The CSV file has 500K rows, and I need to do
 it three times an hour, for 10 hours a day.
 The queries will mainly be updates, but selects and inserts at times as well.
 The database size is estimated to be about 5GB.
 I need to know: is this a classic case for a NoSQL database, or is mysql
 a good option?
 
 Also, if I need to do a 'group by' on a column of a large table, what
 should I keep in mind? Is it advisable?
 
 Please advise,
 
 --
 Thanks and kind Regards,
 Abhishek jain




Re: Which Database when lot of insert / update queries to execute

2012-06-15 Thread hsv
On 2012/06/15 18:14 +0900, Tsubasa Tanaka wrote:
try to use `LOAD DATA INFILE' to import from CSV file.

http://dev.mysql.com/doc/refman/5.5/en/load-data.html

Try is the operative word: MySQL's character format is _like_ CSV, but not
the same. The treatment of NULL is doubtless the biggest stumbling-block.
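
For example (with the default ESCAPED BY '\\' settings), NULL must appear in
the file as \N; an empty field is loaded as the empty string for string
columns, not as NULL:

-- file line:  1,\N,hello   -> col2 is loaded as NULL
-- file line:  1,,hello     -> col2 is loaded as ''
LOAD DATA INFILE '/tmp/import.csv'
INTO TABLE t
FIELDS TERMINATED BY ','
(col1, col2, col3);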





MySQL 5.1: Views, queries, updates and performance issues

2011-12-29 Thread Bruce Ferrell
Hi all,

I've got some semi-general questions on the topics in the title.  What I'm 
looking for is more in the line of theory than query specifics.  I am but a 
poor peasant boy.

What I have is an application that makes heavy use of views.  If I understand 
views correctly (and I may not), views are representations of queries 
themselves. The guy who wrote
the app chose to do updates and joins against the views instead of against the 
underlying tables themselves.

I've tuned to meet the gross memory requirements, and mysqltuner.pl is saying 
that 45% of the joins are without indexes. With the slow query log on and 
log_queries_not_using_indexes enabled, I'm frequently seeing updates that take 
more than 2 seconds to complete... often MUCH longer (how does 157 seconds 
grab you?).

So, with that background, what would you do next and is it possible this use of 
views, in this way is a significant contributor to the problem?

Bruce Ferrell




Re: MySQL 5.1: Views, queries, updates and performance issues

2011-12-29 Thread Arthur Fuller
At first blush, your problem would appear to concern the lack of index-use.
 That's where I would begin my investigation. It might be painstaking, but
I would do something like this:

For each view
Look at the Join(s) and see what columns are being joined
Look at the tables and see what columns are being indexed
If any columns are not indexed that should be, create said index(es)
Next view

Of course, this process could be optimized by looking at the views in terms
of their frequency of use.
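
To make the hunt concrete, a sketch with hypothetical names (not this app's
real schema): EXPLAIN a query that goes through the view, look for a joined
table with type ALL and no possible_keys, and add the missing index on the
join column:

EXPLAIN SELECT v.order_id, v.total
FROM orders_view v
WHERE v.customer_id = 42;

-- a row with type=ALL and NULL possible_keys lacks a usable index
ALTER TABLE orders ADD INDEX idx_customer (customer_id);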

Finally, you didn't mention what sort of front end you're using. It's
possible that you might benefit by using stored procedures rather than
views. That switch would of course cost you some time invested in changing
the front end to pass explicit parameters.

Hope this helps,
Arthur



-- 
Cell: 647.710.1314

Thirty spokes converge on a hub
but it's the emptiness
that makes a wheel work
   -- from the Daodejing


In general, cheaper to INNER JOIN or two separate queries

2011-10-03 Thread Dotan Cohen
I need two fields from two different tables. I could either run two
queries, or a single INNER JOIN query:

$r1 = mysql_query("SELECT fruit FROM fruits WHERE userid = 1");
$r2 = mysql_query("SELECT beer FROM beers WHERE userid = 1");
--or--
$r = mysql_query("SELECT fruits.fruit, beers.beer FROM fruits INNER JOIN
beers ON fruits.userid = beers.userid WHERE beers.userid = 1");

In general, which is preferable? I don't have access to the production
machine to benchmark at the moment, but which is best practice?
Thanks!

-- 
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com




query mystery: union de-optimizes component queries

2011-08-26 Thread Dave Dyer

Why would using UNION cause the subqueries to be de-optimized?

explain
(SELECT count(gamename) as gname, variation from zertz_gamerecord
 where (gmtdate > date_sub(current_timestamp(), interval 90 day)) and
 (player1='13213' or player2='13213') group by variation limit 3)

shows using index on gmtdate

explain
(SELECT count(gamename) as gname, variation from mp_gamerecord
 where (gmtdate > date_sub(current_timestamp(), interval 90 day))
 and (player1='13213' or player2='13213' or player3='13213' or player4='13213'
 or player5='13213' or player6='13213')
 group by variation limit 3)

shows using index gmtdate

explain
(SELECT count(gamename) as gname, variation from zertz_gamerecord
 where (gmtdate > date_sub(current_timestamp(), interval 90 day)) and
 (player1='13213' or player2='13213') group by variation limit 3)
UNION
(SELECT count(gamename) as gname, variation from mp_gamerecord
 where (gmtdate > date_sub(current_timestamp(), interval 90 day))
 and (player1='13213' or player2='13213' or player3='13213' or player4='13213'
 or player5='13213' or player6='13213')
 group by variation limit 3)

i.e., the same two queries show no index use on the first half of the union.
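
If the plan only degrades inside the union, a hedged workaround (assuming the
index really is named gmtdate; check the actual name with SHOW INDEX) is to
force it per union branch:

(SELECT count(gamename) AS gname, variation
 FROM zertz_gamerecord FORCE INDEX (gmtdate)
 WHERE gmtdate > date_sub(current_timestamp(), INTERVAL 90 DAY)
   AND (player1='13213' OR player2='13213')
 GROUP BY variation LIMIT 3)
UNION
(SELECT count(gamename) AS gname, variation
 FROM mp_gamerecord FORCE INDEX (gmtdate)
 WHERE gmtdate > date_sub(current_timestamp(), INTERVAL 90 DAY)
   AND (player1='13213' OR player2='13213' OR player3='13213'
     OR player4='13213' OR player5='13213' OR player6='13213')
 GROUP BY variation LIMIT 3);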





Efficient use of sub queries?

2011-07-15 Thread J B
I was wondering if any one could point out potential problems with the
following query or if there was a better alternative

From a list of users I want to return all who don't have all the specified
user_profile options or those who do not have at least one preference set to
1. The following query seems to work fine.

SELECT DISTINCT(c.id) FROM user c
 WHERE c.id IN (901,907)
 AND
((SELECT COUNT(id) FROM user_profile
  WHERE option_id in (747,749) AND user_id=c.id) < 2
 OR
(SELECT COUNT(id) FROM user_profile
  WHERE option_id in (747,749) AND user_id=c.id AND
preference = 1) != 1
 )


The table definition

CREATE TABLE `user_profile` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `option_id` int(11) NOT NULL,
  `user_id` int(11) NOT NULL,
  `preference` decimal(10,2) DEFAULT '0.00',
  `created_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  KEY `user_profile_user_id` (`user_id`),
  KEY `user_profile_option_id` (`option_id`),
  CONSTRAINT `user_profile_user_id` FOREIGN KEY (`user_id`) REFERENCES
`user` (`id`),
  CONSTRAINT `user_profile_option_id` FOREIGN KEY (`option_id`) REFERENCES
`option` (`id`)
) ENGINE=InnoDB;
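
An untested single-pass alternative (same schema and option ids) joins the
profile rows once and filters with HAVING instead of two correlated
subqueries:

SELECT c.id
FROM user c
LEFT JOIN user_profile p
  ON p.user_id = c.id AND p.option_id IN (747,749)
WHERE c.id IN (901,907)
GROUP BY c.id
HAVING COUNT(p.id) < 2
    OR COALESCE(SUM(p.preference = 1), 0) <> 1;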




Thanks


Simplify 4 queries into 1

2011-02-13 Thread Andre Polykanine
Hi all,
Hope this question is appropriate here :-).
I've got 4 queries:

$q1 = mysql_query("SELECT * FROM `CandidateQuestions` WHERE
`Category`='1' ORDER BY RAND() LIMIT 1");
$q2 = mysql_query("SELECT * FROM `CandidateQuestions` WHERE
`Category`='2' ORDER BY RAND() LIMIT 1");
$q3 = mysql_query("SELECT * FROM `CandidateQuestions` WHERE
`Category`='3' ORDER BY RAND() LIMIT 1");
$q4 = mysql_query("SELECT * FROM `CandidateQuestions` WHERE
`Category`='4' ORDER BY RAND() LIMIT 1");

What I need is to select exactly 4 questions so that each of them
would be randomly selected from each category.
Note: I don't need 4 *any* questions, I need 1 question from the 1st
category, 1 question from the 2nd category etc.
Is there a way to make one query for this?
If I do
SELECT * FROM `CandidateQuestions` WHERE `Category` IN (1,2,3,4)
GROUP BY `Category` LIMIT 4,
I think I would get 4 random questions, wouldn't I?
Thanks in advance!

  

-- 
With best regards from Ukraine,
Andre
Skype: Francophile
Twitter: http://twitter.com/m_elensule
Facebook: http://facebook.com/menelion





Re: Simplify 4 queries into 1

2011-02-13 Thread mos

At 05:39 PM 2/13/2011, Andre Polykanine wrote:

Hi all,
Hope this question is appropriate here :-).
I've got 4 queries:

$q1 = mysql_query("SELECT * FROM `CandidateQuestions` WHERE
`Category`='1' ORDER BY RAND() LIMIT 1");
$q2 = mysql_query("SELECT * FROM `CandidateQuestions` WHERE
`Category`='2' ORDER BY RAND() LIMIT 1");
$q3 = mysql_query("SELECT * FROM `CandidateQuestions` WHERE
`Category`='3' ORDER BY RAND() LIMIT 1");
$q4 = mysql_query("SELECT * FROM `CandidateQuestions` WHERE
`Category`='4' ORDER BY RAND() LIMIT 1");

What I need is to select exactly 4 questions so that each of them
would be randomly selected from each category.
Note: I don't need 4 *any* questions, I need 1 question from the 1st
category, 1 question from the 2nd category etc.
Is there a way to make one query for this?
If I do
SELECT * FROM `CandidateQuestions` WHERE `Category` IN (1,2,3,4)
GROUP BY `Category` LIMIT 4,
I think I would get 4 random questions, wouldn't I?


Correct.

You could use:

(SELECT * FROM `CandidateQuestions` WHERE
 `Category`='1' ORDER BY RAND() LIMIT 1)
UNION (SELECT * FROM `CandidateQuestions` WHERE
 `Category`='2' ORDER BY RAND() LIMIT 1)
UNION (SELECT * FROM `CandidateQuestions` WHERE
 `Category`='3' ORDER BY RAND() LIMIT 1)
UNION (SELECT * FROM `CandidateQuestions` WHERE
 `Category`='4' ORDER BY RAND() LIMIT 1);

and this returns 4 rows, with 1 random record from each category.

Mike






Running Queries When INSERTing Data?

2010-11-08 Thread Hal Vaughan
I'm redesigning some software that's been in use since 2002.  I'll be working 
with databases that will start small and grow along the way.  

In the old format, data would come to us in mega-big text files that had to be 
parsed and manipulated and so on with Perl to remove crap and finally produce 
one tab delimited file.  Once that file was created, another simple program 
would go through and use the data in each line for an INSERT statement that 
would put the data in a table.  This table also has an Idx field that is an 
auto-incrementing primary key for the table.

Each night at 3:30 am, a program would run and would go through the same 
process for each client.  I never timed it, but it could take something like 
30-60 seconds per client, but timing wasn't a major issue, since it had a LONG 
time from then until new data would be inserted into the DB.  The SELECT 
statements to pull the data for each client involve a number of AND and OR 
conditions.  The first one of these would create a temporary table with its 
results, then another long SELECT statement would create a 2nd temporary table 
by filtering the data out more.  This would continue for a few temporary tables 
until the data was filtered.  Then it would be packaged up and encrypted, then 
sent out to the client, who has a program on his computer to read that data and 
print it out if desired.

This has worked, but for a number of reasons, a once-a-day data pull and send 
won't work as well with the new design.  The program on the clients' computers 
will be able to access a special server just for them directly.  (I know the 
concept of a server for each human client sounds inefficient, but it actually 
improves operations in a number of ways.)  So each server will only have to 
provide data for one client.

The big difference is that I'd like to make it so they can access the data 
live, or almost live.  I don't mean all the data, but the subset that meets 
their needs.  In other words, the equivalent of what was sent to them daily in 
the old system.  Their individual servers will still get the big tab-delimited 
file that will still be INSERTed into their DB line by line.  But I'd like to 
be able to select from the new data as it comes in, once it's been given a new 
number in the Idx field.

Is there any way to run a row of data through SELECT queries as it is being 
INSERTed into a table -- or just after?

The reason for doing this, instead of INSERTing all the data, then running a 
program is that as the database grows, pulling out the data will take longer 
and longer, so if there were a way to screen data as it comes in, that would 
make it easier to provide instantly available data.

I also know my knowledge of MySQL is quite limited, so if this can be done in 
better ways, I'd be interested in hearing about them.

Thank you!



Hal



RE: Running Queries When INSERTing Data?

2010-11-08 Thread Gavin Towey
I'm not sure I understand exactly what you mean, but I think you just need to 
keep a timestamp associated with each row as it is inserted, put an index on 
it, then you can select new data just by using the appropriate time range.
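
A minimal sketch of that idea (hypothetical table and index names; note that
in 5.x only one TIMESTAMP column per table can default to CURRENT_TIMESTAMP):

ALTER TABLE client_data
  ADD COLUMN inserted_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  ADD INDEX idx_inserted_at (inserted_at);

-- rows that arrived in roughly the last load:
SELECT * FROM client_data
WHERE inserted_at >= NOW() - INTERVAL 10 MINUTE;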

Also, if you're parsing files into tab delimited format, you don't need to 
write a separate parser to insert rows line by line.  MySQL has LOAD DATA 
INFILE which takes delimited text files and inserts data in bulk, which is much 
faster than inserting line by line.

Regards,
Gavin Towey




Re: Running Queries When INSERTing Data?

2010-11-08 Thread Hal Vaughan

On Nov 8, 2010, at 1:23 PM, Gavin Towey wrote:

 I'm not sure I understand exactly what you mean, but I think you just need to 
 keep a timestamp associated with each row as it is inserted, put an index on 
 it, then you can select new data just by using the appropriate time range.

But won't that take just as long as any other queries?  Or will it be speeded 
up because all the matching records would be adjacent to each other -- like all 
at the end?

 Also, if you're parsing files into tab delimited format, you don't need to 
 write a separate parser to insert rows line by line.  MySQL has LOAD DATA 
 INFILE which takes delimited text files and inserts data in bulk, which is 
 much faster than inserting line by line.

THANKS!  Is this new?  I never read about it before, but then again, from about 
2005-2006 until now the system was running smoothly and that's a long gap with 
no programming work!


Hal

 

RE: Running Queries When INSERTing Data?

2010-11-08 Thread BMBasal
 But won't that take just as long as any other queries?  Or will it be
speeded up because all the matching records would be adjacent to each other
-- like all at the end?

You can order the result data set by timestamp in descending order, so the
latest will come up first, i.e., LIFO.


RE: Running Queries When INSERTing Data?

2010-11-08 Thread Gavin Towey
If you are selecting records within a certain time range that is a subset of 
the entire set of data, then indexes which use the timestamp column will be 
fine.

More generally: create appropriate indexes to optimize queries.

Although typically, you should design the database to be correct first, then 
identify performance bottlenecks in the production system, and fix them 
individually.  Trying to predict performance is difficult.  If you're 
concerned, then it's best to create a test that reproduces the exact situation 
of which you are unsure so you can observe real performance, and experiment 
with optimizations.



How to kill locked queries

2010-10-14 Thread monloi perez
All,

Is there a MySQL configuration setting to kill queries that have been locked
for quite some time? If there is none, what is an alternative approach to
killing these locked queries, and what is the root cause of them?

Thanks,
Mon


  

Re: How to kill locked queries

2010-10-14 Thread Johan De Meersman
The root cause is another query that has tables locked that your locked
queries want. Behind that may be, for example, an inefficient but
often-executed query, high I/O concurrency that has a cumulative slowing
effect, or maybe simply a long-running update that might be better scheduled
during the quiet hours.

You can kill the locked queries, but it is likely that more will simply
appear in their place. You can kill the evil monster query, but depending on
what's actually going on it's quite possible that one of the waiting ones
will take its place as the resource hog.

Sometimes a quick fix is to simply restart the service, if there's just too
much competition for the same locks; but obviously the problem will just
reappear later.

What you need to do is figure out which query is holding the locks, and see
if you can optimize and/or reschedule it.
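
To find and kill the holder by hand (the KILL syntax is real; 12345 below is
a placeholder thread id taken from the process list):

SHOW FULL PROCESSLIST;   -- waiting MyISAM queries show State = 'Locked'
KILL QUERY 12345;        -- terminate just the running statement
KILL 12345;              -- or terminate the whole connection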









-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


Re: How to kill locked queries

2010-10-14 Thread Claudio Nanni
Hi Mon,

Killing locked queries is not the first step in database tuning.
Queries locked for a long time usually depend on slow updates that lock
other updates or selects; this happens on MyISAM (or other engines with
table-level locking).
If you are really sure you want to, and safely can, kill the queries,
you can set up a simple job to check and kill them.
I would rather check why those queries are locked in the first place.

Aloha

Claudio









-- 
Claudio


Re: How to kill locked queries

2010-10-14 Thread monloi perez
Does this happen if your table is InnoDB?

Thanks all,
Mon





Re: How to kill locked queries

2010-10-14 Thread Johan De Meersman
On Thu, Oct 14, 2010 at 9:19 AM, monloi perez mlp_fol...@yahoo.com wrote:

 Does this happen if your table is InnoDB?



That depends on the type of lock. If no lock type is specified, InnoDB will
prefer row locks, while MyISAM will do table locks.

That may help, unless all your queries are trying to access the same rows
anyway :-)  You really need to figure out *what* is being locked and *why*
before you can fix it.
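
On 5.1 with the InnoDB plugin, or on 5.5 and later, the INFORMATION_SCHEMA
lock tables show exactly who is waiting on whom:

SELECT r.trx_id AS waiting_trx, r.trx_query AS waiting_query,
       b.trx_id AS blocking_trx, b.trx_query AS blocking_query
FROM information_schema.INNODB_LOCK_WAITS w
JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id;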

-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


Re: How to kill locked queries

2010-10-14 Thread Perrin Harkins
On Thu, Oct 14, 2010 at 3:28 AM, Johan De Meersman vegiv...@tuxera.be wrote:
 That depends on the type of lock. If no lock type is specified, InnoDB will
 prefer row locks, while MyISAM will do table locks.

 That may help, unless all your queries are trying to access the same rows
 anyway :-)

Even that can work without locking in InnoDB if only one query is
trying to modify the rows.  Unlike MyISAM, readers do not block
writers in InnoDB and vice versa.

- Perrin




Re: opening a server to generalized queries but not too far

2010-06-18 Thread Don Cohen
Raj Shekhar writes:

  One option here might be to use mysql proxy as a man-in-the-middle and
  filter out unwanted queries...
This seems more or less the same as what I'm doing now with php.
The same question applies there - what would you look for in your
filter?




Re: opening a server to generalized queries but not too far

2010-06-17 Thread Raj Shekhar
In infinite wisdom don-mysq...@isis.cs3-inc.com (Don Cohen) wrote:
 
 The question is what I have to prohibit in order to prevent either
 updates or access to other tables, or perhaps other things that
 I should be worried about but haven't yet thought of.


One option here might be to use mysql proxy as a man-in-the-middle and
filter out unwanted queries.  You can find an example on how to do this
with mysql proxy on the mysql forge wiki
http://forge.mysql.com/tools/tool.php?id=108 (more stuff
http://forge.mysql.com/tools/search.php?t=tag&k=mysqlproxy) 

(in case you do not know mysql proxy, see
http://www.oreillynet.com/pub/a/databases/2007/07/12/getting-started-with-mysql-proxy.html?page=1
 )
 
-- 
Raj Shekhar
-
If there's anything more important than my ego around, I want it
caught and shot now.






opening a server to generalized queries but not too far

2010-06-16 Thread Don Cohen

This seems like a topic that must have been studied, but I'm having
trouble figuring out what to search for in Google, since the usual
discussion of sql injection is not what I'm looking for here.
If anyone knows of references that discuss the issue, I'd like to
see them.  I'm also interested in answers for other RDBMS's,
and I imagine that details of implementation may matter, but my
immediate primary interest is mysql used from php.

I want to allow web users to make a very wide variety of queries, but 
limited to queries (no updates, redefinitions, etc), and limited to a
fixed set of tables - let's suppose one table with no joins, and
perhaps a few other restrictions.

I propose to send queries of the following form from php to the DB:
 select colspec from fixedtable
   where wherespec group by groupspec order by orderspec
The user gets to supply all of the spec's.
So, as an example, I want the user to be able to do
 select max(col1) from fixedtable group by col2

The question is what I have to prohibit in order to prevent either
updates or access to other tables, or perhaps other things that
I should be worried about but haven't yet thought of.

So far I have identified at least one problem, which is subqueries
such as 
 select 1 from ... where exists (select 1 from othertable ...)
These can tell the attacker about other data he should not be able to
read.  At the moment I plan to simply disallow inputs containing the
string select (case insensitive).  Is there any way to get a select
statement to execute other statements, such as insert, delete, drop?

I believe that ; is not a problem because a single request from php
to mysql containing multiple statements will result in a syntax error.
If I subject the inputs to mysql_real_escape_string then the user will
not be able to use quotes, which will prevent use of string constants.
What more could an attacker do if I don't escape the inputs?

Finally, suppose I want to limit access to the table to the rows 
where col1=value1.  If I just add that to wherespec what can an
attacker do to read other rows?





Re: opening a server to generalized queries but not too far

2010-06-16 Thread Adam Alkins
Sounds like you just want to GRANT access to specific tables (and with
limited commands), which is exactly what MySQL's privilege system does.

Refer to http://dev.mysql.com/doc/refman/5.1/en/grant.html

For example, you can grant only SELECT privileges on a specific table to a
specific user.
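
A minimal sketch (hypothetical account name, password and database):

CREATE USER 'webuser'@'localhost' IDENTIFIED BY 'secret';
GRANT SELECT ON mydb.fixedtable TO 'webuser'@'localhost';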

-- 
Adam Alkins || http://www.rasadam.com





Re: opening a server to generalized queries but not too far

2010-06-16 Thread Don Cohen
Adam Alkins writes:
  Sounds like you just want to GRANT access to specific tables (and with
  limited commands), which is exactly what MySQL's privilege system does.

How about this part?
   Finally, suppose I want to limit access to the table to the rows
   where col1=value1.  If I just add that to wherespec what can an
   attacker do to read other rows?

The http request I have in mind will be something like
 https://server.foo.com?user=johnpassword=wxyz;...
and the resulting query something like
 select ... from table where user=john and ...
(I will first have verified the password.)

It seems I'd have to create a separate mysql user for each user in
my table, and perhaps also a separate table (or view?) for
that user to be allowed to read.
I suppose the php process could create the table/view, create the
user, then connect as the user to execute the query, then remove the
user.

Even if this turns out to be the best solution, I'm interested in
the answer to the original question.





Re: opening a server to generalized queries but not too far

2010-06-16 Thread Adam Alkins
MySQL doesn't have row level permissions, but this is what VIEWS are for. If
you only want access to specific rows, create a view with that subset of
data. You can create a function (privilege bound) to create the view to make
this more dynamic.
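
A hedged sketch of that pattern (assuming a hypothetical owner column that
holds the MySQL account name; USER() returns 'name@host' for the connected
client):

CREATE VIEW my_rows AS
  SELECT col1, col2
  FROM fixedtable
  WHERE owner = SUBSTRING_INDEX(USER(), '@', 1);

GRANT SELECT ON mydb.my_rows TO 'webuser'@'localhost';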

If you want direct access to the database, then you will need multiple MySQL
users. I don't quite get the purpose of what you're trying to do through a
PHP script; I see little reason why you can't use some client level security
to facilitate data access, especially if you're going to implement dynamic
creation of users in the PHP script itself (which would be rather weak
security imho).

-- 
Adam Alkins || http://www.rasadam.com





RE: opening a server to generalized queries but not too far

2010-06-16 Thread Daevid Vincent
 

 -Original Message-
 From: Don Cohen [mailto:don-mysq...@isis.cs3-inc.com] 

 The http request I have in mind will be something like
  https://server.foo.com?user=johnpassword=wxyz;...
 and the resulting query something like
  select ... from table where user=john and ...
 (I will first have verified the password.)

For the love of God and all that is holy, 
do NOT put the user/pass on the URL like that!!

Do something like this instead:
http://us.php.net/manual/en/features.http-auth.php

Or use mod_auth_mysql to maintain your 'authorized' users to your page.

And as Adam beat me to, use a VIEW to expose ONLY the columns and joins you
want. This is also a good time to normalize the data and column names so
that you don't expose what their internal names are.

http://dev.mysql.com/doc/refman/5.0/en/create-view.html
http://www.techotopia.com/index.php/An_Introduction_to_MySQL_Views
http://www.devshed.com/c/a/MySQL/Views-and-More-in-MySQL-50/

But as he also said, I don't see what you're trying to accomplish. If
someone is technically literate enough to format SQL statements, then just give
them a read-only account to the mysql (or view) directly. Let them use
their own GUI tool like SQLYog or whatever -- it will be far more robust
than anything you can write yourself.

If you're trying to do some reports, then just code up the reports and
use select boxes for the options you want someone to choose. Use jQuery and
table_sorter plugin and you're done.





RE: opening a server to generalized queries but not too far

2010-06-16 Thread Don Cohen
Daevid Vincent writes:

  For the love of God and all that is holy, 
  do NOT put the user/pass on the URL like that!!
What's so unholy (or even unwise) about it?

  Or use mod_auth_mysql to maintain your 'authorized' users to your page.
Why is this so much better?
In my case it's worse cause I don't want this to be interactive.
I want to install something on a user's machine that will access his
data without him having to do anything.  The url is to be used by
a program, not a person on a browser.

  And as Adam beat me to, use a VIEW to expose ONLY the columns and joins you
  want. This is also a good time to normalize the data and column names so
  that you don't expose what their internal names are.
So far I don't like this solution.

  But as he also said, I don't see what you're trying to accomplish. If
I'm trying to let a large number of users each access his own data
and not everyone else's in a very flexible way, in particular,
allowing selection using functions, aggregations, filters.

  someone is technically literate to format SQL statements, then just give
  them a read-only account to the mysql (or view) directly. Let them use
  their own GUI tool like SQLYog or whatever -- it will be far more robust
  than anything you can write yourself.
In this case there may be a lot of users but the queries are likely to
be written by a small number.

  If you're trying to do some reports, then just code up the reports and
  use select boxes for the options you want someone to choose. Use jQuery and
  table_sorter plugin and you're done.
I can't predict what options will be needed.
And this seems much easier.




RE: opening a server to generalized queries but not too far

2010-06-16 Thread Wm Mussatto
On Wed, June 16, 2010 14:47, Don Cohen wrote:
 Daevid Vincent writes:

   For the love of God and all that is holy,
   do NOT put the user/pass on the URL like that!!
 What's so unholy (or even unwise) about it?
The username and password show up in logs on the server and in the
browser's cache, since they are part of the page's address.  Anyone who has
access to either will get them.  Remember, browsers cache history.



RE: opening a server to generalized queries but not too far

2010-06-16 Thread Daevid Vincent
 

 -Original Message-
 From: Don Cohen [mailto:don-mysq...@isis.cs3-inc.com] 
 Sent: Wednesday, June 16, 2010 2:48 PM
 To: Daevid Vincent
 Cc: mysql@lists.mysql.com
 Subject: RE: opening a server to generalized queries but not too far
 
 Daevid Vincent writes:
 
   For the love of God and all that is holy, 
   do NOT put the user/pass on the URL like that!!
 What's so unholy (or even unwise) about it?

Oh my goodness, where to begin...

Well barring the fact that it'll be in the user's cache and browser
history, you are sending this information in plain text across the
internet, open to ANYONE on the way who wishes to packet sniff.

   Or use mod_auth_mysql to maintain your 'authorized' 
 users to your page.
 Why is this so much better?
 In my case it's worse cause I don't want this to be interactive.
 I want to install something on a user's machine that will access his
 data without him having to do anything.  The url is to be used by
 a program, not a person on a browser.

One has nothing to do with the other. 

Mod_auth_mysql will serve as the user/password prompt and authentication
between the user and the page you're creating. It is just an easy way for
YOU to manage the users. If you have their user/password in a database --
presumably you already do, then leverage that. That's all this apache
module will do.

It's no different than you having to use $_GET['user'] and
$_GET['password'] from your URL, querying the DB and granting some data
back, except you've removed the horrific thought of forcing the user to
expose their vitals in plain text in the URL/GET/POST/whatever.

   And as Adam beat me to, use a VIEW to expose ONLY the 
 columns and joins you
   want. This is also a good time to normalize the data and 
 column names so
   that you don't expose what their internal names are.
 So far I don't like this solution.

Interesting. Perhaps you don't understand it? A view seems to suit your
needs very well from what you've explained. It allows a READ ONLY access to
your data. It prevents exposing your column/table names. It normalizes your
data so your customers get it without having to jump through hoops and
knowing intimate details about your schema. Pretty much this is why VIEWs
were invented.

   But as he also said, I don't see what you're trying to 
 accomplish. If
 I'm trying to let a large number of users each access his own data
 and not everyone else's in a very flexible way, in particular,
 allowing selection using functions, aggregations, filters.

Right. And for YOU to code all that up in some fancy web GUI is going to be
a nightmare and will never be all things to all people that you desire. So
you either give people a GUI front end with select boxes and parameters, OR
you give them some direct SQL access to a VIEW, OR you give them some
XML/JSON/Excel/CVS/whatever output that they can then pull into their tool
of choice to manipulate. Why not provide daily SQL dumps of their
normalized data to your users and let them run their reports -- if they're
trying to run SQL queries themselves?

   someone is technically literate to format SQL statements, 
 then just give
   them a read-only account to the mysql (or view) directly. 
 Let them use
   their own GUI tool like SQLYog or whatever -- it will be 
 far more robust
   than anything you can write yourself.
 In this case there may be a lot of users but the queries are likely to
 be written by a small number.
 
   If you're trying to do some reports, then just code up 
 the reports and
   use select boxes for the options you want someone to 
 choose. Use jQuery and
   table_sorter plugin and you're done.
 I can't predict what options will be needed.

Well you don't have to. You just make select boxes and such for each field
that someone would want to select on and various filters. I bet you can
cover 80% of your users with that if not all of them. 

 And this seems much easier.

Oh grasshopper... So Naïve. :)

Here's what I think will happen -- and you can tell me nah nah nah nah in
2 years when you finish writing this uber-tool that does all this fancy SQL
magic manipulation. It will either still fall short of what you wanted it
to do b/c it's too complicated to code, OR it will be too complicated to
use and nobody will understand it enough to use it.

But hey, more power to ya Don if you want to tackle it, go ahead. I would
suggest maybe looking at http://www.phpmyadmin.net 


Daevid.
http://daevid.com


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: opening a server to generalized queries but not too far

2010-06-16 Thread Don Cohen
Daevid Vincent writes:
For the love of God and all that is holy, 
do NOT put the user/pass on the URL like that!!
   What's so unholy (or even unwise) about it?
  Oh my goodness, where to begin...
  Well barring the fact that it'll be in the user's cache and browser
It won't because this url is not intended to be used in the browser
but in an application.
  history, you are sending this information in plain text across the
  internet, open to ANYONE on the way who wishes to packet sniff.
You might notice that the url started with https so this is also not
true. 

  Mod_auth_mysql will serve as the user/password prompt and authentication
  between the user and the page you're creating. It is just an easy way for
  YOU to manage the users. If you have their user/password in a database --
  presumably you already do, then leverage that. That's all this apache
  module will do.
The user prompt relies on a browser, right?
This url is not being used by a browser.

  It's no different than you having to use $_GET['user'] and
  $_GET['password'] from your URL, querying the DB and granting some data
  back, except you've removed the horrific thought of forcing the user to
  expose their vitals in plain text in the URL/GET/POST/whatever.
No prompt in a browser that's not being used.
No plain text if I use https.

  Interesting. Perhaps you don't understand it? A view seems to suit
  your needs very well from what you've explained. It allows a READ
  ONLY access to your data. It prevents exposing your column/table
  names. It normalizes your data so your customers get it without
  having to jump through hoops and knowing intimate details about
  your schema. Pretty much this is why VIEWs were invented.
You want me to create a view for every user, right?
Or else you want me to create a user and a view every time a user
accesses the server and then delete them?
Either way seems ridiculously complicated compared to adding
 where user = input
The user only has to know about the columns I'm advertising, but
if he wants to look at (or look for) others I don't care.
How does the view simplify matters for him?  He still has to write
pretty much the same query.
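
For what it's worth, neither a view per user nor a view per request is
needed: a single view can filter on the connected account (untested sketch;
mydata/owner are invented names, and SQL SECURITY INVOKER matters because
inside a DEFINER view CURRENT_USER() would return the definer instead):

CREATE SQL SECURITY INVOKER VIEW my_rows AS
    SELECT d.*
    FROM mydata AS d
    WHERE d.owner = SUBSTRING_INDEX(CURRENT_USER(), '@', 1);

GRANT SELECT ON mydb.my_rows TO 'alice'@'%';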

  Right. And for YOU to code all that up in some fancy web GUI is
  going to be a nightmare and will never be all things to all people
  that you desire. So you either give people a GUI front end with
  select boxes and parameters, OR you give them some direct SQL
access to a VIEW, OR you give them some XML/JSON/Excel/CSV/whatever
  output that they can then pull into their tool of choice to
The advantage of the direct sql over xml is that they have access to
things like select max(col), where conditions, etc. so they can 
get back much less data and the results they want rather than having
to get back much more and then still have to compute the results
they want.
So far I don't see that my query allowing colspec,wherespec,
etc. is worse in any way than any of the other suggestions, and
I see ways in which it's better than all of them.


  manipulate. Why not provide daily SQL dumps of their normalized
  data to your users and let them run their reports -- if they're
  trying to run SQL queries themselves?
First, why do you assume these are daily reports?
They are not.  This is an application that displays data that may
be changing much faster than that.  The user runs it, perhaps selects
the data/display he wants to see, then gets the data for that display
up to the minute.  Then if he wishes, he makes another choice.
If he wants to see a small summary of a large amount of data, this
is what databases are for.  I want the max or average to be computed
in the database.  I don't want him to retrieve the entire database
and then compute that at the client.

  Well you don't have to. You just make select boxes and such for
  each field that someone would want to select on and various
  filters. I bet you can cover 80% of your users with that if not all
  of them.
You want to tell me what select boxes to supply?
There are
- lots of different where conditions that might be used
- lots of FUNCTIONS of the data to apply
  (which might be inputs to aggregations and order by's)

   And this seems much easier.

  Oh grasshopper... So Naïve. :)
Well then perhaps you should supply either some actual technical
arguments that show me what's wrong with what I propose, or some
references that contain those arguments.

  Here's what I think will happen -- and you can tell me nah nah nah
  nah in 2 years when you finish writing this uber-tool that does
  all this fancy SQL magic manipulation. It will either still fall
The sql manipulation is to be done by mysql - so I'm now finished
writing that part.

  short of what you wanted it to do b/c it's too complicated to code,
  OR it will be too complicated to use and nobody will understand it
  enough to use it.
If what I have described is too complicated to use then so is mysql.
I expect most end users will run what

Re: Queries not in Cache

2010-05-07 Thread Darvin Denmian
Can somebody help me with this?

Thanks!

On Thu, May 6, 2010 at 10:39 AM, Darvin Denmian
darvin.denm...@gmail.com wrote:
 Hello,

 I've activated the query_cache in MySQL and set the variable
 query_cache_limit to 1 MB.
 My question is:

 How can I know which queries weren't cached because they exceeded the
 value of query_cache_limit?

 **Sorry for my Brazilian English :(

 Thanks!


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Queries not in Cache

2010-05-07 Thread Johan De Meersman
What queries, precisely, I can't tell you, but you can get a good idea
about how your cache performs using the counters in show global status;
see the online manual for what they all mean :)

Look at show global status like 'Qcache%', for a start.
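
A minimal check along those lines (assuming a 5.x server with the query
cache compiled in and query_cache_size > 0):

SHOW GLOBAL VARIABLES LIKE 'query_cache%';   -- configuration: size, limit, type
SHOW GLOBAL STATUS LIKE 'Qcache%';           -- runtime counters: Qcache_hits,
                                             -- Qcache_inserts, Qcache_not_cached, ...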

On Fri, May 7, 2010 at 2:22 PM, Darvin Denmian darvin.denm...@gmail.com wrote:

 Can somebody help me with this?

 Thanks!

 On Thu, May 6, 2010 at 10:39 AM, Darvin Denmian
 darvin.denm...@gmail.com wrote:
  Hello,
 
  I've activated the query_cache in MySQL and set the variable
  query_cache_limit to 1 MB.
  My question is:
 
  How can I know which queries weren't cached because they exceeded the
  value of query_cache_limit?
 
  **Sorry for my Brazilian English :(
 
  Thanks!
 

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be




-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


log-slow-queries

2010-05-07 Thread Stephen Sunderlin

Can't get slow queries to log.  Does this not work in myisam?

*snip*
[mysqld]
log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time = 1
*snip*

restarted mysqld  - no log.

Created in file in /var/log/mysql/

*snip*
-rwxr--r-- 1 mysql mysql 0 May  7 10:33 mysql-slow.log
*snip*

still not writing to the file

I've read
http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html
http://www.mydigitallife.info/2007/01/22/enable-logging-of-slow-queries-slow-query-log-in-mysql-database/


looks pretty simple -  not sure what I'm missing.

Thanks!



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: log-slow-queries

2010-05-07 Thread mos

At 12:04 PM 5/7/2010, Stephen Sunderlin wrote:

Can't get slow queries to log.  Does this not work in myisam?


Sure it does. Have you tried:

slow_query_time = 1

Mike



*snip*
[mysqld]
log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time = 1
*snip*

restarted mysqld  - no log.

Created in file in /var/log/mysql/

*snip*
-rwxr--r-- 1 mysql mysql 0 May  7 10:33 mysql-slow.log
*snip*

still not writing to the file

I've read
http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html
http://www.mydigitallife.info/2007/01/22/enable-logging-of-slow-queries-slow-query-log-in-mysql-database/


looks pretty simple -  not sure what I'm missing.

Thanks!



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=mo...@fastmail.fm



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: log-slow-queries

2010-05-07 Thread mos

At 03:58 PM 5/7/2010, you wrote:

At 12:04 PM 5/7/2010, Stephen Sunderlin wrote:

Can't get slow queries to log.  Does this not work in myisam?


Sure it does. Have you tried:

slow_query_time = 1

Mike


Sorry, ignore that previous message. (Serves me right for trying to
remember it off the top of my head.)


I'm using (Windows):

general_log=0
log-output=FILE
log_queries_not_using_indexes=1
long_query_time=3
slow_query_log=1
slow_query_log_file=U:/mysql5.5/data/SLOWLOG.TXT

I assume you are outputting the slow query log to a text file and not to a 
table.
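
On 5.1 and later you can also confirm (and flip) the settings at runtime,
along these lines:

SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';   -- enabled? which file?
SHOW GLOBAL VARIABLES LIKE 'long_query_time';
SET GLOBAL slow_query_log = 1;                  -- 5.1+; on 5.0 a restart is needed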


Mike



*snip*
[mysqld]
log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time = 1
*snip*

restarted mysqld  - no log.

Created in file in /var/log/mysql/

*snip*
-rwxr--r-- 1 mysql mysql 0 May  7 10:33 mysql-slow.log
*snip*

still not writing to the file

I've read
http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html
http://www.mydigitallife.info/2007/01/22/enable-logging-of-slow-queries-slow-query-log-in-mysql-database/


looks pretty simple -  not sure what I'm missing.

Thanks!



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=mo...@fastmail.fm



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=mo...@fastmail.fm



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: log-slow-queries

2010-05-07 Thread Anirudh Sundar
Hello Stephen,

Did you try this?

mysql> show global variables like '%log_output%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_output    | FILE  |
+---------------+-------+

If log_output is FILE, the slow queries will get logged to the file.
mysql> set global log_output = 'FILE';  [if you find log_output set to TABLE or
NONE]

Let me know if you have any issues...

Cheers,
Anirudh Sundar


On Fri, May 7, 2010 at 10:34 PM, Stephen Sunderlin 
stephen.sunder...@verizon.net wrote:

 Can't get slow queries to log.  Does this not work in myisam?

 *snip*
 [mysqld]
 log-slow-queries = /var/log/mysql/mysql-slow.log
 long_query_time = 1
 *snip*

 restarted mysqld  - no log.

 Created in file in /var/log/mysql/

 *snip*
 -rwxr--r-- 1 mysql mysql 0 May  7 10:33 mysql-slow.log
 *snip*

 still not writing to the file

 I've read
 http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html

 http://www.mydigitallife.info/2007/01/22/enable-logging-of-slow-queries-slow-query-log-in-mysql-database/


 looks pretty simple -  not sure what I'm missing.

 Thanks!



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=sundar.anir...@gmail.com




Queries not in Cache

2010-05-06 Thread Darvin Denmian
Hello,

I've activated the query_cache in MySQL and set the variable
query_cache_limit to 1 MB.
My question is:

How can I know which queries weren't cached because they exceeded the
value of query_cache_limit?

**Sorry for my Brazilian English :(

Thanks!

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Enabling slow log queries for particular db

2010-04-12 Thread RaMeSh
Hi All

How can I get MySQL to only 'log-slow-queries' on specific databases instead
of globally?

-- 
Ramesh


Re: Enabling slow log queries for particular db

2010-04-12 Thread Aravinth
Hi Ramesh,

As far as I know, the slow query log can only be enabled globally.

Regards,
Aravinth


On Mon, Apr 12, 2010 at 4:01 PM, RaMeSh rames...@gmail.com wrote:

 Hi All

 How can I get MySQL to only 'log-slow-queries' on specific databases
 instead
 of globally?

 --
 Ramesh



Combine Two Queries

2010-04-06 Thread Chris W
I have a content management system.  One of the methods I use to grant 
access to various parts of the site is with Groups.  I can link Pages, 
Users, Modules, etc (objects) to any number of groups.  So a Many to 
Many relationship.  I use the grouplink table to do this.


CREATE TABLE  `grouplink` (
 `LinkType` 
set('user','page','template','templatefile','menu','module') NOT NULL 
DEFAULT '',

 `ID` int(10) unsigned NOT NULL DEFAULT '0',
 `GroupID` int(10) unsigned NOT NULL DEFAULT '0',
 PRIMARY KEY (`LinkType`,`ID`,`GroupID`)
) ;


LinkType indicates what type of  object I am linking to.  If I am 
linking to  a page, ID is the PageID, if to a User, ID is UserID... 
etc.  And GroupID is just the group I am linking the object to. 


The group table looks like this...
CREATE TABLE  `group` (
 `GroupID` int(10) unsigned NOT NULL AUTO_INCREMENT,
 `GroupName` varchar(45) NOT NULL DEFAULT '',
 `Active` tinyint(3) unsigned NOT NULL DEFAULT '1',
 PRIMARY KEY (`GroupID`)
);

My rule to give a user access to any object is the user has to be linked 
to at least one of the same groups that object is linked to.  There is 
one exception to that rule, and that is, if an object isn't linked to 
any groups then it doesn't matter what groups the User is in.  Currently 
I use two queries to implement these rules.  If the Count on the first
query is 0, then access is granted; if not, I execute the second query,
and if the count on it is greater than 0, access is granted.


SELECT COUNT(`GroupID`)
FROM `grouplink` 
WHERE `LinkType` = '$LinkType' AND `ID` = '$ID'


SELECT COUNT(g.`GroupID`)
FROM `grouplink` u                      -- groups the user, UserID, is in
JOIN `grouplink` l USING (`GroupID`)    -- groups the LinkType, ID is in
JOIN `group` g USING (`GroupID`)
WHERE u.`LinkType` = 'user' AND l.`LinkType` = '$LinkType'
AND u.`ID` = '$UserID' AND l.`ID` = '$ID'
AND g.`Active`


Is there any way merge these into one query?

Chris W
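
One possible merge (untested sketch, simply reusing the two counts above as
subqueries; the result is 1 when access should be granted, 0 otherwise):

SELECT
    (SELECT COUNT(*)
       FROM `grouplink`
      WHERE `LinkType` = '$LinkType' AND `ID` = '$ID') = 0
    OR
    (SELECT COUNT(*)
       FROM `grouplink` u
       JOIN `grouplink` l USING (`GroupID`)
       JOIN `group` g USING (`GroupID`)
      WHERE u.`LinkType` = 'user' AND l.`LinkType` = '$LinkType'
        AND u.`ID` = '$UserID' AND l.`ID` = '$ID'
        AND g.`Active`) > 0
    AS access_granted;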

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



a mysqldump causes all other queries to be queued super slow. Any clue what is going on?

2010-03-22 Thread Andres Salazar
Hello,

Everytime i run a mysqldump (mysql-server-5.0.77) all the other
legitimate queries that are ocurring at that time pretty much sleep
and build up in the processlist untill I either stop the dump or wait
for it finish. The moment i do either one i can have about 8-15
queries waiting they all free up immediately.

What could be causing this? I know there might be many factors
involved but Id like to get some clues as to where to look.

Thanks

Andres

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: a mysqldump causes all other queries to be queued super slow. Any clue what is going on?

2010-03-22 Thread Wm Mussatto
On Mon, March 22, 2010 11:08, Andres Salazar wrote:
 Hello,

 Everytime i run a mysqldump (mysql-server-5.0.77) all the other
 legitimate queries that are ocurring at that time pretty much sleep
 and build up in the processlist untill I either stop the dump or wait
 for it finish. The moment i do either one i can have about 8-15
 queries waiting they all free up immediately.

 What could be causing this? I know there might be many factors
 involved but Id like to get some clues as to where to look.

 Thanks

 Andres

First, what are the table types?
Did you set --lock-tables?  This will lock the tables within a database.
That could cause the behavior you are seeing.  I get a similar result when
I do a repair table.  For transaction-enabled table types try
--single-transaction, since it avoids read locks (according to the man
pages).
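
For an all-InnoDB database that would be something like the following
(sketch; MyISAM tables would still take read locks):

mysqldump --single-transaction --quick mydb > mydb.sql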

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: a mysqldump causes all other queries to be queued super slow. Any clue what is going on?

2010-03-22 Thread Andres Salazar
Hello,

On Mon, Mar 22, 2010 at 12:55 PM, Wm Mussatto mussa...@csz.com wrote:


 First, what are the table types?
MyISAM
 Did you set --lock-tables?  No

 That could cause the behavior you are seeing.  I get a similar result when
 I do a repair table   For transaction enabled table types try
 --single-transaction since it avoids read locks (according to the man
 pages.

That's great, however... this type of result was not being exhibited
some months ago. I know the database has grown. It has also happened
that some big queries run against it cause the same issue. I
think this is a bigger issue than just a mysqldump.

I am trying to figure out why, or where I should look.

Andres


 --
 William R. Mussatto
 Systems Engineer
 http://www.csz.com
 909-920-9154


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=ndrsslz...@gmail.com



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Slow queries when using left join

2010-03-19 Thread Olav Mørkrid
Dear MySQL forum.

I have performance problems when using left join x combined with
where x.y is null, particularly when combining three tables this
way.

Please contact me by e-mail if you are familiar with these issues and
know how to eliminate slow queries.

I would really appreciate your help.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Slow queries when using left join

2010-03-19 Thread Johan De Meersman
2010/3/19 Olav Mørkrid olav.mork...@gmail.com

 Dear MySQL forum.

 I have performance problems when using left join x combined with
 where x.y is null, particularly when combining three tables this
 way.


With a left join, particularly when you're using *is (not) null*, you can't
use index selecting on your right table. That is, you're bound to do a
tablescan on what is essentially the cartesian product of your tables.

Every additional table only compounds the problem: 100x100 is 10,000, but
100x100x100 is 1,000,000.
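
For concreteness, the pattern under discussion looks like this (tables a and
b are invented; an index on the joined column is usually what rescues it):

SELECT a.id
FROM a
LEFT JOIN b ON b.a_id = a.id
WHERE b.a_id IS NULL;                       -- rows of a with no match in b

ALTER TABLE b ADD INDEX idx_a_id (a_id);    -- lets the server probe b per row
                                            -- instead of scanning it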

Avoid left joins whenever possible - in some cases it's quicker to split out
the complex query and implement it in code with loops - not always, though,
you'll have to apply some elbow grease to find out the optimal solution.

The most recent example of this, was a hierarchical lookup query in Drupal's
taxonomy module: the hierarchy table was left-joined to itself five times.
Execution time on an unloaded machine was 0.54 seconds. By doing individual
lookups in a code loop until I got to the top level, I replaced that query
with a maximum of five (and usually less) 0.00 second ones over an existing
covering index.


Another thing - and maybe one you should look at first, is whether you can
add more selective where-clauses for your base table. That doesn't always
stop at the actual data you want, either. Another example from here: for a
radiostation, there was a multiple left-join query to display the last 20
played songs on the homepage. However, the playlist table keeps growing, so
I got the website people to agree that it's pretty unlikely that songs from
yesterday end up in those 20: we added an index on the playdate and selected
on that. Boom, execution time down from 0.35 to 0.01. In addition, killing
off old playlist items would've been very beneficial, but this was not an
option due to business requirements. Shame, I love to delete people's data
:-D


And, of course, check if you have indexes on the major parts of your where
clause. Selectivity brings speed.


I seem to have the order of obviousness in this mail wrong, though. Please
read it from bottom to top :-)


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


RE: slow queries not being logged

2010-02-23 Thread John Daisley



 From: machi...@rdc.co.za
 To: mysql@lists.mysql.com
 Subject: slow queries not being logged
 Date: Tue, 23 Feb 2010 09:59:13 +0200
 
 Good day all
 
  
 
 I hope you can assist me with this one...
 
  
 
 We have a client where the slow query log was disabled.
 

The slow query log is on the server only.  If you are saying you have enabled
the slow query log and the server's query log is empty, can you post your
my.cnf/my.ini file? Also make sure --long-query-time is set appropriately.


  
 
 We noticed that the slow query amount (when viewing global
 status) has skyrocketed during the last 2 weeks going up to over 2 million
 (from 160 million queries).
 
  
 
 We wanted to look at these queries to see if it can be
 optimised to reduce the amount and went through the whole database restart
 routine to enable the slow query log again (they are running version 5.0 so
 had to restart).
 
  
 
  
 
 However, even though the slow query log is enabled, it is
 not logging the queries to the file specified.
 
  
 
 Can someone please assist in why this is not being done? I
 thought that it might be logging to a default filename but there is only one
 slow queries log file in the directory and it is empty.
 
  
 
 Checking the global status again, it showed 29 000 slow
 queries since this morning (3 hours ago) but nothing in the logs.
 
  
 
  
 
 Your help will be appreciated.
 
  
 
  
 
 Regards
 
  
 
  
_
Do you have a story that started on Hotmail? Tell us now
http://clk.atdmt.com/UKM/go/195013117/direct/01/

RE: slow queries not being logged

2010-02-23 Thread Machiel Richards
Hi All

 

I found my problem and this was kind of a blonde moment for
me...

 

When configuring the log_slow_queries parameter, it was
configured as follows:  log_slow_queries=1

 

Thus the file being created is called 1, and the 1 does not
mean it is enabled.

 

I have fixed this now but need to wait for a gap to reboot
again to have it set properly. (have to live with the filename 1 for the
time being.)
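
In other words, something like this in my.cnf (5.0 syntax; the value is the
file name, not a 0/1 switch):

[mysqld]
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time  = 2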

 

I did however find something interesting though, while
looking at the queries being logged.

 

The long_query_time is set to 2 (2 seconds, I am assuming),
however all the queries being logged state that they ran for 0 seconds.

 

I am busy doing explain plans on some of them now but not
really sure what to look for yet. (Rather new to MySQL and hoping google will
have some answers :) )

 

 

Thank you

 

 

 

 

From: John Daisley [mailto:mg_s...@hotmail.com] 
Sent: 23 February 2010 10:24 AM
To: machi...@rdc.co.za; mysql@lists.mysql.com
Subject: RE: slow queries not being logged

 



 From: machi...@rdc.co.za
 To: mysql@lists.mysql.com
 Subject: slow queries not being logged
 Date: Tue, 23 Feb 2010 09:59:13 +0200
 
 Good day all
 
 
 
 I hope you can assist me with this one...
 
 
 
 We have a client where the slow query log was disabled.
 

The slow query log is on the server only.  If you are saying you have enabled
the slow query log and the server's query log is empty, can you post your
my.cnf/my.ini file? Also make sure --long-query-time is set appropriately.


 
 
 We noticed that the slow query amount (when viewing global
 status) has skyrocketed during the last 2 weeks going up to over 2 million
 (from 160 million queries).
 
 
 
 We wanted to look at these queries to see if it can be
 optimised to reduce the amount and went through the whole database restart
 routine to enable the slow query log again (they are running version 5.0
so
 had to restart).
 
 
 
 
 
 However, even though the slow query log is enabled, it is
 not logging the queries to the file specified.
 
 
 
 Can someone please assist in why this is not being done? I
 thought that it might be logging to a default filename but there is only
one
 slow queries log file in the directory and it is empty.
 
 
 
 Checking the global status again, it showed 29 000 slow
 queries since this morning (3 hours ago) but nothing in the logs.
 
 
 
 
 
 Your help will be appreciated.
 
 
 
 
 
 Regards
 
 
 

  _  

Do you want a Hotmail account? Sign-up
http://clk.atdmt.com/UKM/go/19780/direct/01/  now - Free



Re: slow queries not being logged

2010-02-23 Thread Ananda Kumar
The slow query log will also have sql's which are not using indexes (doing
full table scans).
Maybe those queries with ZERO SECONDS run on small tables without using
indexes.

regards
anandkl

On Tue, Feb 23, 2010 at 2:02 PM, Machiel Richards machi...@rdc.co.za wrote:

 Hi All



I found my problem and this was kind of a blonde moment for
 me...



When configuring the log_slow_queries parameter, it was
 configured as follows:  log_slow_queries=1



Thus the file being created is called 1, and the 1 does not
 mean it is enabled.



I have fixed this now but need to wait for a gap to reboot
 again to have it set properly. (have to live with the filename 1 for the
 time being.)



I did however find something interesting though, while
 looking at the queries being logged.



The long_query_time is set to 2 (2 seconds, I am assuming),
 however all the queries being logged state that they ran for 0 seconds.



I am busy doing explain plans on some of them now but not
 really sure what to look for yet. (Rather new to MySQL and hoping google will
 have some answers :) )





 Thank you









 From: John Daisley [mailto:mg_s...@hotmail.com]
 Sent: 23 February 2010 10:24 AM
 To: machi...@rdc.co.za; mysql@lists.mysql.com
 Subject: RE: slow queries not being logged





  From: machi...@rdc.co.za
  To: mysql@lists.mysql.com
  Subject: slow queries not being logged
  Date: Tue, 23 Feb 2010 09:59:13 +0200
 
  Good day all
 
 
 
  I hope you can assist me with this one...
 
 
 
  We have a client where the slow query log was disabled.
 

 The slow query log is on the server only.  If you are saying you have enabled
 the slow query log and the server's query log is empty, can you post your
 my.cnf/my.ini file? Also make sure --long-query-time is set appropriately.


 
 
  We noticed that the slow query amount (when viewing global
  status) has skyrocketed during the last 2 weeks going up to over 2
 million
  (from 160 million queries).
 
 
 
  We wanted to look at these queries to see if it can be
  optimised to reduce the amount and went through the whole database
 restart
  routine to enable the slow query log again (they are running version 5.0
 so
  had to restart).
 
 
 
 
 
  However, even though the slow query log is enabled, it is
  not logging the queries to the file specified.
 
 
 
  Can someone please assist in why this is not being done? I
  thought that it might be logging to a default filename but there is only
 one
  slow queries log file in the directory and it is empty.
 
 
 
  Checking the global status again, it showed 29 000 slow
  queries since this morning (3 hours ago) but nothing in the logs.
 
 
 
 
 
  Your help will be appreciated.
 
 
 
 
 
  Regards
 
 
 

  _

 Do you want a Hotmail account? Sign-up
 http://clk.atdmt.com/UKM/go/19780/direct/01/  now - Free




Re: slow queries not being logged

2010-02-23 Thread Carsten Pedersen

You might want to read the comments to this posting:
http://www.bitbybit.dk/carsten/blog/?p=116

Several tools/methods for controlling and analyzing the slow query log are
suggested there.

Best,

/ Carsten



On Tue, 23 Feb 2010 14:09:30 +0530, Ananda Kumar anan...@gmail.com wrote:

 The slow query log will also have sql's which are not using indexes (doing
 full table scans).
 Maybe those queries with ZERO SECONDS run on small tables without using
 indexes.

 regards
 anandkl

 On Tue, Feb 23, 2010 at 2:02 PM, Machiel Richards machi...@rdc.co.za wrote:

 Hi All

        I found my problem and this was kind of a blonde moment for me...

        When configuring the log_slow_queries parameter, it was
 configured as follows:  log_slow_queries=1

        Thus the file being created is called 1, and the 1 does not
 mean it is enabled.

        I have fixed this now but need to wait for a gap to reboot
 again to have it set properly. (have to live with the filename 1 for the
 time being.)

        I did however find something interesting though, while
 looking at the queries being logged.

        The long_query_time is set to 2 (2 seconds, I am assuming),
 however all the queries being logged state that they ran for 0 seconds.

        I am busy doing explain plans on some of them now but not
 really sure what to look for yet. (Rather new to MySQL and hoping google
 will have some answers :) )

 Thank you

 From: John Daisley [mailto:mg_s...@hotmail.com]
 Sent: 23 February 2010 10:24 AM
 To: machi...@rdc.co.za; mysql@lists.mysql.com
 Subject: RE: slow queries not being logged

  From: machi...@rdc.co.za
  To: mysql@lists.mysql.com
  Subject: slow queries not being logged
  Date: Tue, 23 Feb 2010 09:59:13 +0200

  Good day all

  I hope you can assist me with this one...

  We have a client where the slow query log was disabled.

 The slow query log is on the server only.  If you are saying you have
 enabled the slow query log and the server's query log is empty, can you
 post your my.cnf/my.ini file? Also make sure --long-query-time is set
 appropriately.

  We noticed that the slow query amount (when viewing global
  status) has skyrocketed during the last 2 weeks going up to over 2 million
  (from 160 million queries).

  We wanted to look at these queries to see if it can be
  optimised to reduce the amount and went through the whole database restart
  routine to enable the slow query log again (they are running version 5.0
  so had to restart).

  However, even though the slow query log is enabled, it is
  not logging the queries to the file specified.

  Can someone please assist in why this is not being done? I
  thought that it might be logging to a default filename but there is only
  one slow queries log file in the directory and it is empty.

  Checking the global status again, it showed 29 000 slow
  queries since this morning (3 hours ago) but nothing in the logs.

  Your help will be appreciated.

  Regards

  _

 Do you want a Hotmail account? Sign-up
 http://clk.atdmt.com/UKM/go/19780/direct/01/  now - Free

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



slow queries not being logged

2010-02-22 Thread Machiel Richards
Good day all

 

I hope you can assist me with this one...

 

We have a client where the slow query log was disabled.

 

We noticed that the slow query amount (when viewing global
status) has skyrocketed during the last 2 weeks going up to over 2 million
(from 160 million queries).

 

We wanted to look at these queries to see if it can be
optimised to reduce the amount and went through the whole database restart
routine to enable the slow query log again (they are running version 5.0 so
had to restart).

 

 

However, even though the slow query log is enabled, it is
not logging the queries to the file specified.

 

Can someone please assist in why this is not being done? I
thought that it might be logging to a default filename but there is only one
slow queries log file in the directory and it is empty.

 

Checking the global status again, it showed 29 000 slow
queries since this morning (3 hours ago) but nothing in the logs.

 

 

Your help will be appreciated.

 

 

Regards

 



Re: logging of BAD queries

2010-02-11 Thread Baron Schwartz
Andy,

On Tue, Feb 9, 2010 at 10:27 AM, andy knasinski a...@nrgsoft.com wrote:
 I've used the general and slow query log in the past, but I am trying to
 track down some queries from a compiled app that never seem to be hitting
 the DB server.

 My guess is that the SQL syntax is bad and never gets executed, but I don't
 see any related queries in the general query log. Does the general log
 include invalid SQL?

Yes, it does.  If you're not able to debug the application itself, I
would sniff the TCP traffic.  Use wireshark or mk-query-digest.
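
A capture-and-digest run could look roughly like this (flags as in the
Maatkit documentation; adjust the interface and port to your setup):

tcpdump -i any port 3306 -s 65535 -x -nn -q -tttt > mysql.tcp.txt
mk-query-digest --type tcpdump mysql.tcp.txt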

Baron

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



logging of BAD queries

2010-02-09 Thread andy knasinski
I've used the general and slow query log in the past, but I am trying  
to track down some queries from a compiled app that never seem to be  
hitting the DB server.


My guess is that the SQL syntax is bad and never gets executed, but I
don't see any related queries in the general query log. Does the  
general log include invalid SQL?


I've also tried to use the driver logging, but on Windows it  
overwrites with the last SQL command so I cannot get a good capture as  
requests are sent to the DB.


DB is MySQL 5.0.x

Thanks
andy


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: logging of BAD queries

2010-02-09 Thread mos

At 09:27 AM 2/9/2010, andy knasinski wrote:

I've used the general and slow query log in the past, but I am trying
to track down some queries from a compiled app that never seem to be
hitting the DB server.

My guess is that the SQL syntax is bad and never gets executed, but I
don't see any related queries in the general query log. Does the
general log include invalid SQL?


I don't think it does.



I've also tried to use the driver logging, but on Windows it
overwrites with the last SQL command so I cannot get a good capture as
requests are sent to the DB.

DB is MySQL 5.0.x



 I do something like that in my compiled application. All SQL queries are 
sent to a single procedure and executed there. I trap any errors and log
the SQL in a table along with the error message. This is useful to 
determine if someone is trying to break into the database (sql injection). 
Having a central procedure to execute all queries is paramount in 
controlling and capturing errors. I can also unplug and plug in a different 
database engine quite easily rather than hunting down all direct calls to 
the database. I also don't have to worry about trapping errors throughout 
the application. It's all done at one central point.


I've been doing it this way for 5 years and would never start a large 
application without it.
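
The log table itself can be minimal, along these lines (names invented for
illustration):

CREATE TABLE sql_error_log (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT,
    logged_at DATETIME NOT NULL,
    sql_text  TEXT NOT NULL,       -- the failing statement, verbatim
    error_msg TEXT NOT NULL,       -- what the server reported
    PRIMARY KEY (id)
);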


Mike 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: logging of BAD queries

2010-02-09 Thread andy knasinski
Unfortunately, I'm using a commercial application and trying to debug  
as to why some data does and does not get updated properly.


On Feb 9, 2010, at 2:57 PM, mos wrote:



I do something like that in my compiled application. All SQL queries  
are sent to a single procedure and executed there. I trap any
errors and log the SQL in a table along with the error message. This  
is useful to determine if someone is trying to break into the  
database (sql injection). Having a central procedure to execute all  
queries is paramount in controlling and capturing errors. I can also  
unplug and plug in a different database engine quite easily rather  
than hunting down all direct calls to the database. I also don't  
have to worry about trapping errors throughout the application. It's  
all done at one central point.


I've been doing it this way for 5 years and would never start a  
large application without it.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: logging of BAD queries

2010-02-09 Thread Kyong Kim
I'm not positive if the general log captures all invalid queries, but
it does capture at least some.
I was asked the same question a few months back and checked to make
sure that manually issued invalid queries are logged (IIRC).
Could it be that the queries are never even making it to the database?
Kyong

On Tue, Feb 9, 2010 at 2:05 PM, andy knasinski a...@nrgsoft.com wrote:
 Unfortunately, I'm using a commercial application and trying to debug as to
 why some data does and does not get updated properly.

 On Feb 9, 2010, at 2:57 PM, mos wrote:


 I do something like that in my compiled application. All SQL queries are
 sent to a single procedure and executed there. I trap any errors and log
 the SQL in a table along with the error message. This is useful to determine
 if someone is trying to break into the database (sql injection). Having a
 central procedure to execute all queries is paramount in controlling and
 capturing errors. I can also unplug and plug in a different database engine
 quite easily rather than hunting down all direct calls to the database. I
 also don't have to worry about trapping errors throughout the application.
 It's all done at one central point.

 I've been doing it this way for 5 years and would never start a large
 application without it.


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=kykim...@gmail.com



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: logging of BAD queries

2010-02-09 Thread Sebastian Mendel

Am 09.02.2010 16:27, schrieb andy knasinski:

I've used the general and slow query log in the past, but I am trying to
track down some queries from a compiled app that never seem to be
hitting the DB server.

My guess is that the SQL syntax is bad and never gets executed, but I
don't see any related queries in the general query log. Does the general
log include invalid SQL?

I've also tried to use the driver logging, but on Windows it overwrites
with the last SQL command so I cannot get a good capture as requests are
sent to the DB.

DB is MySQL 5.0.x


you can try MySQL proxy

--
Sebastian Mendel


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



MySQL University session on February 4: Optimizing Queries with EXPLAIN

2010-02-02 Thread Stefan Hinz
MySQL University: Optimizing Queries with EXPLAIN
http://forge.mysql.com/wiki/Optimizing_Queries_with_Explain

This Thursday (February 4th, 14:00 UTC), Morgan Tocker will talk about
Optimizing Queries with Explain. Morgan was a technical instructor at
MySQL and works for Percona today.

For MySQL University sessions, point your browser to this page:

http://webmeeting.dimdim.com/portal/JoinForm.action?confKey=mysqluniversity

You need a browser with a working Flash plugin. You may register for a
Dimdim account, but you don't have to. (Dimdim is the conferencing
system we're using for MySQL University sessions. It provides integrated
voice streaming, chat, whiteboard, session recording, and more.)

MySQL University is a free educational online program for
engineers/developers. MySQL University sessions are open to anyone, not
just Sun employees. Sessions are recorded (slides and audio), so if you
can't attend the live session you can look at the recording anytime
after the session.

Here's the tentative list of upcoming sessions:

* February 11: MySQL Galera - Multi-Master Replication (Seppo
   Jaakola & Alex Yurchenko)
* February 18: Performance Schema: Instrumenting Code (Marc Alff)
* February 25: Securich - Security Plugin for MySQL (Darren Cassar)
* March  4: MySQL Column Databases (Robin Schumacher)
* March 11: Improving MySQL Full-Text Search (Kristofer Pettersson)

By the way, did I mention that we need more speakers to fill up the 2010
schedule? If you'd like to be a speaker, have a look at this blog post:
http://blogs.sun.com/mysqlf/entry/mysql_university_speakers_wanted1

Cheers,

Stefan
-- 
Stefan Hinz stefan.h...@sun.com, Documentation Manager MySQL

Phone: +49-30-82702940, Fax: +49-30-82702941, http://dev.mysql.com/doc
Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB161028
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels
Vorsitzender des Aufsichtsrates: Martin Haering

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Techniques for queries on a series?

2009-11-17 Thread Jan Steinman
I often need a pattern where one record refers to the one before it,  
based on the order of some field.


For example, a vehicle log, in which the field odometer is the  
current odometer reading upon an event, such as a trip completion,  
fueling, maintenance, etc. Very often one wants to calculate the  
difference in odometer readings, which means accessing two consecutive  
records, when ordered by odometer. I currently put both readings in  
one record, which seems an unnecessary de-normalization and extra work  
in data entry.


Another example: an amortization database, where the value of the loan
principal depends on the payment, interest rate, but also the previous
record's principal. Someone makes a payment on a loan, which needs to
be entered along with the declining balance, but that depends on the  
balance of the previous record.


Quite often, I see this pattern in time series data. Data is logged  
and time-stamped, and many queries depend on the difference in time- 
stamps between two consecutive records. For example, milk production  
records: with milk goats, if milking is early or late, the amount of  
milk is lower or higher. I need to do an analysis of short-term milk  
production, which means daily production needs to be normalized for  
variations in time, which means I need to refer to time and volume  
deltas from two consecutive records, ordered by time.


Are there some good techniques for dealing with this common pattern in  
SQL? Or do I need to do it all with two queries and a programming  
language?


Pointers to good web references are welcome. I have googled quite a  
bit, and haven't turned up anything apropos.


Thanks for whatever insight you can offer!

 A virus has marked this email as being virus-free! 
 Jan Steinman http://www.VeggieVanGogh.com 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Techniques for queries on a series?

2009-11-17 Thread Peter Brawley

Jan,

 I often need a pattern where one record refers to the one before 
it, based on the order of some field.


Some ideas under Sequences at 
http://www.artfulsoftware.com/infotree/queries.php.
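
A minimal sketch of the self-join flavour of those techniques, using the
odometer example (table name invented; pre-8.0 MySQL has no window
functions, so the previous row comes from a correlated subquery):

SELECT cur.odometer,
       cur.odometer - prev.odometer AS distance
FROM vehicle_log AS cur
JOIN vehicle_log AS prev
  ON prev.odometer = (SELECT MAX(o.odometer)
                        FROM vehicle_log AS o
                       WHERE o.odometer < cur.odometer);

The first record drops out because it has no predecessor; a LEFT JOIN keeps
it with a NULL distance instead.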


PB



Jan Steinman wrote:
I often need a pattern where one record refers to the one before it, 
based on the order of some field.


For example, a vehicle log, in which the field odometer is the 
current odometer reading upon an event, such as a trip completion, 
fueling, maintenance, etc. Very often one wants to calculate the 
difference in odometer readings, which means accessing two consecutive 
records, when ordered by odometer. I currently put both readings in 
one record, which seems an unnecessary de-normalization and extra work 
in data entry.


Another example: an amortization database, where the value of the loan
principal depends on the payment, interest rate, but also the previous
record's principal. Someone makes a payment on a loan, which needs to
be entered along with the declining balance, but that depends on the 
balance of the previous record.


Quite often, I see this pattern in time series data. Data is logged 
and time-stamped, and many queries depend on the difference in 
time-stamps between two consecutive records. For example, milk 
production records: with milk goats, if milking is early or late, the 
amount of milk is lower or higher. I need to do an analysis of 
short-term milk production, which means daily production needs to be 
normalized for variations in time, which means I need to refer to time 
and volume deltas from two consecutive records, ordered by time.


Are there some good techniques for dealing with this common pattern in 
SQL? Or do I need to do it all with two queries and a programming 
language?


Pointers to good web references are welcome. I have googled quite a 
bit, and haven't turned up anything apropos.


Thanks for whatever insight you can offer!

 A virus has marked this email as being virus-free! 
 Jan Steinman http://www.VeggieVanGogh.com 






No virus found in this incoming message.
Checked by AVG - www.avg.com 
Version: 8.5.425 / Virus Database: 270.14.69/2508 - Release Date: 11/17/09 07:40:00


  


Re: Techniques for queries on a series?

2009-11-17 Thread Jan Steinman


On 17 Nov 09, at 10:41, Peter Brawley wrote:

 I often need a pattern where one record refers to the one before  
it, based on the order of some field.


Some ideas under Sequences at http://www.artfulsoftware.com/infotree/queries.php 
.


Thanks, Peter! What a marvellous resource!


 You know what? What makes our economy grow is energy. And  
Americans are used to going to the gas tank, and when they put that  
hose in their, uh, tank, and when I do it, I wanna get gas out of it.  
And when I turn the light switch on, I want the lights to go on, and I  
don't want somebody to tell me I gotta change my way of living to  
satisfy them. Because this is America, and this is something we've  
worked our way into, and the American people are entitled to it, and  
if we're going improve our standard of living, you have to consume  
more energy. -- Senator Chuck Grassley (R-IA) 

 Jan Steinman, http://www.VeggieVanGogh.com 




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: log sql queries?

2009-11-15 Thread Sydney Puente
 Ah! Thanks - I see. 5.1.39
Actually I did look in the manual, but failed to note the differences between
pre-5.1.29 and 5.1.29.
I was using the pre-5.1.29 syntax.

Now I am using the correct syntax in /etc/my.cnf
$ grep general /etc/my.cnf
general_log=1
general_log_file=/var/log/mysql/mysql.log

Now I get a log file, but no actual sql commands appear within.
$ tail -f mysql.log
/usr/sbin/mysqld, Version: 5.1.39-community-log (MySQL Community Server (GPL)). 
started with:
Tcp port: 3306  Unix socket: /var/lib/mysql/mysql.sock
Time  Id  Command  Argument


Can't see anything relevant in the manual.

Strange(?)

Syd 
++
Sorry can't remember what version you said you were using; if you have a
version prior to 5.1.29 to log all queries enter the following in the
[mysqld] section of your my.cnf

log = /path/to/logfile/filename.log

Remembering that the path you specify must be writeable by the server. 

If you are on version 5.1.29 or later use

general_log=1
general_log_file=/path/to/logfile/filename.log

If you have a very busy server the general query log will get very very
big very quickly. It's best to only enable the general query log for
identifying problems and performance tuning.

Take a look at the manual(http://dev.mysql.com/doc/refman/5.1/en/)! You
can get answers to questions like this so much quicker from there.

Regards

John Daisley


On Fri, 2009-11-13 at 16:43 +, Sydney Puente wrote:
 OK thanks to some help from this list I now have a blank my.cnf file in /etc
 And I want to set up logging of all sql  queries.
 
 So I have tried:
 
 SET GLOBAL general_log = 'ON';
 
 and/or putting  (only)  /var/log/mysql/mysql.log
 in my.cnf and doing a restart via /etc/init.d
 (have a pid file now -Ta!)
 
 but still no sql commands in the log file.
 
 There are stop/start up commands in /var/log/mysqld.log
 
 but that is about it
 
 Any suggestions?
 
 Syd





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



log sql queries?

2009-11-13 Thread Sydney Puente
OK thanks to some help from this list I now have a blank my.cnf file in /etc
And I want to set up logging of all sql  queries.

So I have tried:

SET GLOBAL general_log = 'ON';

and/or putting  (only)  /var/log/mysql/mysql.log
in my.cnf and doing a restart via /etc/init.d
(have a pid file now -Ta!)

but still no sql commands in the log file.

There are stop/start up commands in /var/log/mysqld.log

but that is about it

Any suggestions?

Syd




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: log sql queries?

2009-11-13 Thread John Daisley
Sorry can't remember what version you said you were using; if you have a
version prior to 5.1.29 to log all queries enter the following in the
[mysqld] section of your my.cnf

log = /path/to/logfile/filename.log

Remembering that the path you specify must be writeable by the server. 

If you are on version 5.1.29 or later use

general_log=1
general_log_file=/path/to/logfile/filename.log

If you have a very busy server the general query log will get very very
big very quickly. It's best to only enable the general query log for
identifying problems and performance tuning.
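
On 5.1.29 and later you can also turn it on and off at runtime without
touching my.cnf, along these lines:

SET GLOBAL general_log_file = '/var/log/mysql/mysql.log';
SET GLOBAL general_log = 'ON';
SHOW GLOBAL VARIABLES LIKE 'general_log%';   -- confirm both settings took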

Take a look at the manual(http://dev.mysql.com/doc/refman/5.1/en/)! You
can get answers to questions like this so much quicker from there.

Regards

John Daisley


On Fri, 2009-11-13 at 16:43 +, Sydney Puente wrote:
 OK thanks to some help from this list I now have a blank my.cnf file in /etc
 And I want to set up logging of all sql  queries.
 
 So I have tried:
 
 SET GLOBAL general_log = 'ON';
 
 and/or putting  (only)  /var/log/mysql/mysql.log
 in my.cnf and doing a restart via /etc/init.d
 (have a pid file now -Ta!)
 
 but still no sql commands in the log file.
 
 There are stop/start up commands in /var/log/mysqld.log
 
 but that is about it
 
 Any suggestions?
 
 Syd
 
 
   
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: 2 Queries need to combine into one

2009-11-10 Thread Ananda Kumar
select (SELECT dealer.FIRMID, (COUNT(*) -1) * 25.00 + 270.00 AS FEE
FROM `dealer` `dealer`
 INNER JOIN `branches` `branches` ON
(`branches`.`FIRMID` = `dealer`.`FIRMID`)
WHERE (( `dealer`.`CRD_NUM` = 0 )
  OR ( `dealer`.`CRD_NUM` IS NULL  ))
  AND ( `dealer`.`LIC_TYPE` IN (1,2,3,7,9) )
  AND ( `branches`.`BRANCH_NUM` = 0 )
  AND ( `branches`.`STATUSID` = 31 )
GROUP BY `dealer`.`FIRMID`
ORDER BY `dealer`.`FILE_NUM`
) + (SELECT dealer.FIRMID, (COUNT(*) * 275.00) AS FEE
FROM `dealer` `dealer`
 INNER JOIN `branches` `branches` ON
(`branches`.`FIRMID` = `dealer`.`FIRMID`)
 INNER JOIN `agentdealer` `agentdealer` ON
(`dealer`.`FIRMID` = `agentdealer`.`FIRMID`)
WHERE (( `dealer`.`CRD_NUM` = 0 )
  OR ( `dealer`.`CRD_NUM` IS NULL  ))
  AND ( `dealer`.`LIC_TYPE` IN (1,2,3,7,9) )
  AND (branches.BRANCH_NUM= 0)
  AND (branches.STATUSID = 31)
  AND ( `agentdealer`.`STATUSID` = 31 )
GROUP BY `dealer`.`FIRMID`
ORDER BY `dealer`.`FIRMID`
)
On Mon, Nov 9, 2009 at 10:20 PM, Robin Brady rbr...@ssb.state.tx.us wrote:

 I am very new to MySQL and trying to use Navicat Report Builder to format a
 renewal invoice to send to our registrants.  The renewal fees are fixed  for
 each type of registrant but the actual fee is not part of the database and
 must be computed as the report is generated.  As far as I can tell, the
 Report Builder SUM function is fairly basic and can only SUM actual fields
 in the database.  If I can format a query to compute the sum and create a
 data view in the report builder I can put the total for each firm on the
 report.

 I have 2 separate queries that will compute the total renewal fees for
 branches and total renewal fees for an agents but I can't figure out how to
 add these 2 numbers together in the query.

 Here are the 2 queries.  Note that there will always be at least 1 branch
fee but there may be >= 0 agent fees per firm.

 SELECT dealer.FIRMID, (COUNT(*) -1) * 25.00 + 270.00 AS FEE
 FROM `dealer` `dealer`
  INNER JOIN `branches` `branches` ON
 (`branches`.`FIRMID` = `dealer`.`FIRMID`)
 WHERE (( `dealer`.`CRD_NUM` = 0 )
   OR ( `dealer`.`CRD_NUM` IS NULL  ))
   AND ( `dealer`.`LIC_TYPE` IN (1,2,3,7,9) )
   AND ( `branches`.`BRANCH_NUM` = 0 )
   AND ( `branches`.`STATUSID` = 31 )
 GROUP BY `dealer`.`FIRMID`
 ORDER BY `dealer`.`FILE_NUM`


 SELECT dealer.FIRMID, (COUNT(*) * 275.00) AS FEE
 FROM `dealer` `dealer`
  INNER JOIN `branches` `branches` ON
 (`branches`.`FIRMID` = `dealer`.`FIRMID`)
  INNER JOIN `agentdealer` `agentdealer` ON
 (`dealer`.`FIRMID` = `agentdealer`.`FIRMID`)
 WHERE (( `dealer`.`CRD_NUM` = 0 )
   OR ( `dealer`.`CRD_NUM` IS NULL  ))
   AND ( `dealer`.`LIC_TYPE` IN (1,2,3,7,9) )
   AND (branches.BRANCH_NUM= 0)
   AND (branches.STATUSID = 31)
   AND ( `agentdealer`.`STATUSID` = 31 )
 GROUP BY `dealer`.`FIRMID`
 ORDER BY `dealer`.`FIRMID`



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=anan...@gmail.com




Re: 2 Queries need to combine into one

2009-11-10 Thread Robin Brady
Initially I received "operand should only have one column", so I removed the
dealer.FIRMID from the select statement, and then the query just returned NULL.

Each query works fine on its own but I can't seem to combine them so that it
gives me the total of the fees.

Robin
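
One shape that does produce a single FEE per FIRMID (untested sketch; it
wraps the two working queries as derived tables and LEFT JOINs them, since a
firm may have zero agent fees):

SELECT b.FIRMID,
       b.FEE + COALESCE(a.FEE, 0) AS TOTAL_FEE
FROM (
    SELECT dealer.FIRMID, (COUNT(*) - 1) * 25.00 + 270.00 AS FEE
    FROM `dealer`
    INNER JOIN `branches` ON branches.FIRMID = dealer.FIRMID
    WHERE (dealer.CRD_NUM = 0 OR dealer.CRD_NUM IS NULL)
      AND dealer.LIC_TYPE IN (1,2,3,7,9)
      AND branches.BRANCH_NUM = 0
      AND branches.STATUSID = 31
    GROUP BY dealer.FIRMID
) AS b
LEFT JOIN (
    SELECT dealer.FIRMID, COUNT(*) * 275.00 AS FEE
    FROM `dealer`
    INNER JOIN `branches` ON branches.FIRMID = dealer.FIRMID
    INNER JOIN `agentdealer` ON dealer.FIRMID = agentdealer.FIRMID
    WHERE (dealer.CRD_NUM = 0 OR dealer.CRD_NUM IS NULL)
      AND dealer.LIC_TYPE IN (1,2,3,7,9)
      AND branches.BRANCH_NUM = 0
      AND branches.STATUSID = 31
      AND agentdealer.STATUSID = 31
    GROUP BY dealer.FIRMID
) AS a ON a.FIRMID = b.FIRMID
ORDER BY b.FIRMID;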

 Ananda Kumar anan...@gmail.com 11/10/2009 3:54 AM 
select (SELECT dealer.FIRMID, (COUNT(*) -1) * 25.00 + 270.00 AS FEE
FROM `dealer` `dealer`
 INNER JOIN `branches` `branches` ON
(`branches`.`FIRMID` = `dealer`.`FIRMID`)
WHERE (( `dealer`.`CRD_NUM` = 0 )
  OR ( `dealer`.`CRD_NUM` IS NULL  ))
  AND ( `dealer`.`LIC_TYPE` IN (1,2,3,7,9) )
  AND ( `branches`.`BRANCH_NUM` = 0 )
  AND ( `branches`.`STATUSID` = 31 )
GROUP BY `dealer`.`FIRMID`
ORDER BY `dealer`.`FILE_NUM`
) + (SELECT dealer.FIRMID, (COUNT(*) * 275.00) AS FEE
FROM `dealer` `dealer`
 INNER JOIN `branches` `branches` ON
(`branches`.`FIRMID` = `dealer`.`FIRMID`)
 INNER JOIN `agentdealer` `agentdealer` ON
(`dealer`.`FIRMID` = `agentdealer`.`FIRMID`)
WHERE (( `dealer`.`CRD_NUM` = 0 )
  OR ( `dealer`.`CRD_NUM` IS NULL  ))
  AND ( `dealer`.`LIC_TYPE` IN (1,2,3,7,9) )
  AND (branches.BRANCH_NUM= 0)
  AND (branches.STATUSID = 31)
  AND ( `agentdealer`.`STATUSID` = 31 )
GROUP BY `dealer`.`FIRMID`
ORDER BY `dealer`.`FIRMID`
)
On Mon, Nov 9, 2009 at 10:20 PM, Robin Brady rbr...@ssb.state.tx.us wrote:

 I am very new to MySQL and trying to use Navicat Report Builder to format a
 renewal invoice to send to our registrants.  The renewal fees are fixed  for
 each type of registrant but the actual fee is not part of the database and
 must be computed as the report is generated.  As far as I can tell, the
 Report Builder SUM function is fairly basic and can only SUM actual fields
 in the database.  If I can format a query to compute the sum and create a
 data view in the report builder I can put the total for each firm on the
 report.

 I have 2 separate queries that will compute the total renewal fees for
 branches and total renewal fees for an agents but I can't figure out how to
 add these 2 numbers together in the query.

 Here are the 2 queries.  Note that there will always be at least 1 branch
 fee but there may be >= 0 agent fees per firm.

 SELECT dealer.FIRMID, (COUNT(*) -1) * 25.00 + 270.00 AS FEE
 FROM `dealer` `dealer`
  INNER JOIN `branches` `branches` ON
 (`branches`.`FIRMID` = `dealer`.`FIRMID`)
 WHERE (( `dealer`.`CRD_NUM` = 0 )
   OR ( `dealer`.`CRD_NUM` IS NULL  ))
   AND ( `dealer`.`LIC_TYPE` IN (1,2,3,7,9) )
   AND ( `branches`.`BRANCH_NUM` = 0 )
   AND ( `branches`.`STATUSID` = 31 )
 GROUP BY `dealer`.`FIRMID`
 ORDER BY `dealer`.`FILE_NUM`


 SELECT dealer.FIRMID, (COUNT(*) * 275.00) AS FEE
 FROM `dealer` `dealer`
  INNER JOIN `branches` `branches` ON
 (`branches`.`FIRMID` = `dealer`.`FIRMID`)
  INNER JOIN `agentdealer` `agentdealer` ON
 (`dealer`.`FIRMID` = `agentdealer`.`FIRMID`)
 WHERE (( `dealer`.`CRD_NUM` = 0 )
   OR ( `dealer`.`CRD_NUM` IS NULL  ))
   AND ( `dealer`.`LIC_TYPE` IN (1,2,3,7,9) )
   AND (branches.BRANCH_NUM= 0)
   AND (branches.STATUSID = 31)
   AND ( `agentdealer`.`STATUSID` = 31 )
 GROUP BY `dealer`.`FIRMID`
 ORDER BY `dealer`.`FIRMID`



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql 
 To unsubscribe:http://lists.mysql.com/mysql?unsub=anan...@gmail.com 




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org


