I want to make sure my caching system is working properly and that my
mysql server isn't being held up by repetitive queries
(i.e. like the side-products table that appears on every web page).
I'm pretty sure I cached the site pretty well, but I want to make sure
that I didn't miss anything.
Is there some sort of tool that allows me to check for repetitive
queries?
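One low-overhead way to answer this (a sketch, assuming MySQL 5.6+ with performance_schema enabled; the table and columns below are the stock digest-summary schema) is to rank statements by their normalized digest, which groups repeated queries that differ only in literal values:

```sql
-- Most frequently executed statement shapes, with total time spent.
-- SUM_TIMER_WAIT is in picoseconds, hence the division.
SELECT DIGEST_TEXT,
       COUNT_STAR            AS executions,
       SUM_TIMER_WAIT / 1e12 AS total_secs
FROM performance_schema.events_statements_summary_by_digest
ORDER BY COUNT_STAR DESC
LIMIT 10;
```

A query that should be served from your cache but still shows a high execution count here is one you missed.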
likely nobody knows what you
at 10:37 PM, Steve Quezadas st...@modelprinting.com
wrote:
On 18.05.2015 at 23:37, Steve Quezadas wrote:
Hello Surya,
Part of the problem may be that you are so focused on the details that
you might have lost sight of the purpose.
On 7/12/2014 8:24 AM, Surya Savarika wrote:
Hi,
I have two query series that I wonder whether they can be compacted
into a single query:
FIRST QUERY SERIES
cursor.execute(select d.ID, d.Name, b.SupersetID from
books_data as d join books as b on d.ID=b.BooksDataID2
where b.BooksDataID!=b.BooksDataID2 and
Hi list,
I have some problems with INSERT INTO and UPDATE queries on a big table.
Let me put the code and explain it ...
I have copied the create code of the table. This table has more than
1500 rows.
Create Table: CREATE TABLE `radacct` (
`RadAcctId` bigint(21) NOT NULL AUTO_INCREMENT
- Original Message -
From: Johan De Meersman vegiv...@tuxera.be
Subject: Re: SHOW FULL COLUMNS QUERIES hogging my CPU
In any case, this is nothing that can be fixed on the database level.
I may or may not have to swallow that :-p
I've been hammering a munin plugin that graphs
to fix your problem?
I was about to comment that it looks like queries generated by an ORM or
connector. It looks like from your version string you have MySQL
Enterprise; may I suggest creating a ticket with support?
Regarding your most recent reply:
All the SHOW FULL COLUMN queries that we
Hi All
I am no expert with MySQL and databases, hence seeking out some help on
this forum.
Basically I got a query dump of my application during its operation. I
had collected the queries for about 4 hours. Ran some scripts on the
number of queries being sent to the databases.
The query
On 02.06.2014 15:35, Jatin Davey wrote:
The advice to 'avoid LIKE in general' is a little strong. LIKE is
very useful and does not always cause inefficient queries, although
the possibility is there.
However, there is one form which must be avoided at all costs: the one
where the wildcard is the first character in the pattern
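To illustrate the difference (a sketch on a hypothetical `products` table with an index on `name`):

```sql
-- A trailing wildcard can still use the index on `name` (range scan):
SELECT id FROM products WHERE name LIKE 'wid%';

-- A leading wildcard cannot use the index at all; every row must be
-- examined, i.e. a full table scan:
SELECT id FROM products WHERE name LIKE '%get';
```

Running EXPLAIN on both statements shows the first using the index and the second falling back to a scan.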
a query dump of my application during its operation. I
had collected the queries for about 4 hours.
Ran some scripts on the number of queries being sent to the databases.
The query file was a whopping 4 GB in size. Upon analyzing the queries I
found that there were a total of 30
million queries
All the SHOW FULL COLUMN queries that we do are on very small tables;
they hardly cross 50 rows. Hence, whenever these queries are made I can
see high CPU usage in %user_time. If they were very large tables then
the CPU would be spending a lot of time
- Original Message -
From: Jatin Davey jasho...@cisco.com
Subject: Re: SHOW FULL COLUMNS QUERIES hogging my CPU
Certain parts of our code use DataNucleus while other parts of the code
A data persistence product... there's your problem.
Persisting objects into a relational database
-Original Message-
From: Vikas Shukla [mailto:myfriendvi...@gmail.com]
Sent: Thursday, May 30, 2013 7:19 PM
To: Robinson, Eric; mysql@lists.mysql.com
Subject: RE: Are There Slow Queries that Don't Show in the
Slow Query Logs?
Hi,
No, it does not represent the time from
seconds to execute.
Sent from my Windows Phone From: Robinson, Eric
Sent: 31-05-2013 03:48
To: mysql@lists.mysql.com
Subject: Are There Slow Queries that Don't Show in the Slow Query Logs?
As everyone knows, with MyISAM, queries and inserts can lock tables
and force other queries to wait in a queue. When
Richard, there is more to a system than number of queries.
Please post these in a new thread on http://forums.mysql.com/list.php?24 :
SHOW GLOBAL STATUS;
SHOW VARIABLES;
Ram size
I will do some analysis and provide my opinion.
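Before posting the full status dump, a rough load figure can be pulled from two counters alone (a sketch; `Questions` counts every statement the server has received since startup):

```sql
-- Average queries-per-second since the server started is roughly
-- Questions divided by Uptime (seconds).
SHOW GLOBAL STATUS LIKE 'Questions';
SHOW GLOBAL STATUS LIKE 'Uptime';
```

For example, Questions = 8,640,000 over Uptime = 86,400 works out to about 100 queries/second on average; peak load will of course be higher.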
-Original Message-
From: Manuel Arostegui
I am looking to spec out hardware for a new database server. I figured
a good starting point would be to find out how much usage my current
server is getting. It's just a local machine that runs mysql and is
queried by a few users here in the office. Is there a way that mysql
can tell me info about
2013/04/04 22:40 +0200, Manuel Arostegui
You can start with show innodb status;
It is now
show engine innodb status
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql
2013/4/4 h...@tbbs.net
Yep, sorry, not used to it just yet :-)
--
Manuel Aróstegui
Systems Team
tuenti.com
. Queries and inserts are too
slow. Meaning, one-two inserts per second, while the other case inserts are
around 800 per second.
Our hardware is not optimized for a database server, but I don't have any
other choice. It is mostly a desktop computer:
Intel core i5, windows 32 bits, 3GB RAM, one disk
(with substitutions filled in)
-Original Message-
From: Andrés Tello [mailto:mr.crip...@gmail.com]
Sent: Tuesday, October 09, 2012 7:04 AM
To: Adrián Espinosa Moreno
Cc: mysql@lists.mysql.com
Subject: Re: Slow queries / inserts InnoDB
You are forcing mysql to do full table scans with the substr
other table per ISN.
e. Here is the problem. If I have a few files to process (around
3000-4000 lines in total, a small array) these steps work fine, good speed.
But if I have big files or a lot of files (more than 1 lines in total,
a big array), these steps are incredibly slow. Queries and inserts
hi,
I am biased towards mysql, and hence I am asking this on the mysql forum first.
I am designing a solution which will need me to import from CSV; I am using
my Java code to parse. The CSV file has 500K rows, and I need to do it thrice
an hour, for 10 hours a day.
The queries will mainly be UPDATE, but SELECT and INSERT also at times
2012/06/15 18:14 +0900, Tsubasa Tanaka
try to use `LOAD DATA INFILE' to import from CSV file.
http://dev.mysql.com/doc/refman/5.5/en/load-data.html
Try is the operative word: MySQL's character format is _like_ CSV, but not
the same. The treatment of NULL is doubtless the biggest
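A sketch of how that NULL gap is usually bridged (the table and columns here are hypothetical): MySQL's own format writes NULL as `\N`, while plain CSV can only offer an empty field, so a SET clause is used to translate on the way in.

```sql
-- Load a CSV whose second field may be empty; map empty to NULL.
LOAD DATA INFILE '/tmp/rows.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES                 -- skip the header row
(col_a, @raw_b)                -- capture field 2 into a user variable
SET col_b = NULLIF(@raw_b, ''); -- '' becomes NULL
```

Without the `SET ... NULLIF` step, empty CSV fields land as empty strings (or zeros for numeric columns), not NULLs.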
are representations of queries
themselves. The guy who wrote
the app chose to do updates and joins against the views instead of against the
underlying tables themselves.
I've tuned to meet the gross memory requirements and mysqltuner.pl is saying
that 45% of the joins are without indexes
I need two fields from two different tables. I could either run two
queries, or a single INNER JOIN query:
$r1 = mysql_query("SELECT fruit FROM fruits WHERE userid = 1");
$r2 = mysql_query("SELECT beer FROM beers WHERE userid = 1");
--or--
$r = mysql_query("SELECT fruits.fruit, beers.beer FROM
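One way the truncated single-query version might read (a sketch, assuming both tables really do share a `userid` column as the post suggests):

```sql
-- One round trip instead of two; userid should be indexed in both tables.
SELECT fruits.fruit, beers.beer
FROM fruits
INNER JOIN beers ON beers.userid = fruits.userid
WHERE fruits.userid = 1;
```

Note the INNER JOIN returns nothing if either table lacks a row for the user; if either side may be missing, a LEFT JOIN (or the two separate queries) is the safer shape.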
mp_gamerecord
where (gmtdate > date_sub(current_timestamp(),interval 90 day))
and (player1='13213' or player2='13213' or player3='13213' or player4='13213'
or player5='13213' or player6='13213')
group by variation limit 3)
ie: the same two queries shows using no indexes on the first half
I was wondering if any one could point out potential problems with the
following query or if there was a better alternative
From a list of users I want to return all who don't have all the specified
user_profile options or those who do not have at least one preference set to
1. The following
Hi all,
Hope this question is appropriate here :-).
I've got 4 queries:
$q1 = mysql_query("SELECT * FROM `CandidateQuestions` WHERE
`Category`='1' ORDER BY RAND() LIMIT 1");
$q2 = mysql_query("SELECT * FROM `CandidateQuestions` WHERE
`Category`='2' ORDER BY RAND() LIMIT 1");
$q3
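The four statements can at least be collapsed into one round trip with UNION ALL (a sketch; MySQL allows per-branch ORDER BY/LIMIT when each SELECT is parenthesized):

```sql
-- One random question per category, in a single statement.
(SELECT * FROM `CandidateQuestions` WHERE `Category`='1' ORDER BY RAND() LIMIT 1)
UNION ALL
(SELECT * FROM `CandidateQuestions` WHERE `Category`='2' ORDER BY RAND() LIMIT 1)
UNION ALL
(SELECT * FROM `CandidateQuestions` WHERE `Category`='3' ORDER BY RAND() LIMIT 1)
UNION ALL
(SELECT * FROM `CandidateQuestions` WHERE `Category`='4' ORDER BY RAND() LIMIT 1);
```

`ORDER BY RAND()` still sorts each category's rows, so on a large table a two-step pick (random offset per category) scales better; on a small question table this form is fine.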
the big tab-delimited
file that will still be INSERTed into their DB line by line. But I'd like to
be able to select from the new data as it comes in, once it's been given a new
number in the Idx field.
Is there any way to run a row of data through SELECT queries as it is being
INSERTed
Sent: Monday, November 08, 2010 10:18 AM
To: mysql@lists.mysql.com
Subject: Running Queries When INSERTing Data?
I'm redesigning some software that's been in use since 2002. I'll be working
with databases that will start small and grow along the way.
In the old format, data would come to us in mega
that take just as long as any other queries? Or will it be sped
up because all the matching records would be adjacent to each other -- like all
at the end?
Also, if you're parsing files into tab delimited format, you don't need to
write a separate parser to insert rows line by line. MySQL
You can order the result data set by timestamp in descending order, so the
latest will come up first, i.e., LIFO
If you are selecting records within a certain time range that is a subset of
the entire set of data, then indexes which use the timestamp column will be
fine.
More generally: create appropriate indexes to optimize queries.
Although typically, you should design the database to be correct first
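Concretely, the index advice above might look like this (a sketch; the table and column names are hypothetical):

```sql
-- Index the timestamp column so range predicates become index range
-- scans instead of full table scans.
CREATE INDEX idx_created_at ON mytable (created_at);

-- A time-range query that can now use the index:
SELECT *
FROM mytable
WHERE created_at >= '2010-11-01'
  AND created_at <  '2010-11-08'
ORDER BY created_at DESC;  -- latest first, served from the same index
```

Whether the new rows happen to sit "at the end" of the table then stops mattering; the index locates the range either way.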
All,
Is there a mysql configuration to kill queries that have been locked for quite
some time. If there's none what is an alternative approach to kill these locked
queries and what is the root cause of it?
Thanks,
Mon
The root cause is another query that has tables locked that your locked
queries want. Behind that may be, for example, an inefficient but
often-executed query, high I/O concurrency that has a cumulative slowing
effect, or maybe simply a long-running update that might be better scheduled
during
Hi Mon,
Killing locked queries is not the first step in database tuning.
Queries locked for a long time usually depend on slow updates that lock
other updates or selects;
this happens on MyISAM (or table-level locking engines).
If you are really sure you want and can without problems kill
queries
On Thu, Oct 14, 2010 at 9:19 AM, monloi perez mlp_fol...@yahoo.com wrote:
Does this happen if your table is InnoDB?
That depends on the type of lock. If no lock type is specified, InnoDB will
prefer row locks, while MyISAM will do table locks.
That may help, unless all your queries are trying to access the same rows
anyway.
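When a manual kill really is warranted, the usual sequence is (a sketch; the thread id below is hypothetical):

```sql
-- Find the blocking thread and the queries waiting behind it.
SHOW FULL PROCESSLIST;

-- Kill just the statement (the connection survives):
KILL QUERY 1234;   -- 1234 = Id column from the processlist

-- For InnoDB, bounding the wait is often better than killing by hand:
SET GLOBAL innodb_lock_wait_timeout = 30;  -- seconds before a waiter gives up
```

This treats the symptom only; the blocking query found in the processlist is what actually needs tuning or rescheduling.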
Raj Shekhar writes:
One option here might be to use mysql proxy as a man-in-the-middle and
filter out unwanted queries...
This seems more or less the same as what I'm doing now with php.
The same question applies there - what would you look for in your
filter?
proxy as a man-in-the-middle and
filter out unwanted queries. You can find an example on how to do this
with mysql proxy on the mysql forge wiki
http://forge.mysql.com/tools/tool.php?id=108 (more stuff
http://forge.mysql.com/tools/search.php?t=tag&k=mysqlproxy)
(in case you do not know mysql proxy
in answers for other RDBMS's,
and I imagine that details of implementation may matter, but my
immediate primary interest is mysql used from php.
I want to allow web users to make a very wide variety of queries, but
limited to queries (no updates, redefinitions, etc), and limited to a
fixed set of tables - let's suppose one table with no joins, and
perhaps a few other restrictions.
I propose to send queries of the following form
Adam Alkins writes:
Sounds like you just want to GRANT access to specific tables (and with
limited commands), which is exactly what MySQL's privilege system does.
How about this part?
Finally, suppose I want to limit access to the table to the rows
where col1=value1. If I just add that
MySQL doesn't have row level permissions, but this is what VIEWS are for. If
you only want access to specific rows, create a view with that subset of
data. You can create a function (privilege bound) to create the view to make
this more dynamic.
If you want direct access to the database, then you
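A minimal sketch of the view-based row restriction described above (database, table, column, and account names are all hypothetical):

```sql
-- Expose only the rows where col1 = 'value1'.
CREATE VIEW mydb.user_rows AS
  SELECT * FROM mydb.mytable WHERE col1 = 'value1';

-- Grant the web account access to the view, not the base table.
GRANT SELECT ON mydb.user_rows TO 'webuser'@'%';
```

With no grant on `mytable` itself, `webuser` can query freely against the view but can never see rows outside the `col1 = 'value1'` subset.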
-Original Message-
From: Don Cohen [mailto:don-mysq...@isis.cs3-inc.com]
The http request I have in mind will be something like
https://server.foo.com?user=john&password=wxyz;...
and the resulting query something like
select ... from table where user=john and ...
(I will first
or whatever -- it will be far more robust
than anything you can write yourself.
In this case there may be a lot of users but the queries are likely to
be written by a small number.
If you're trying to do some reports, then just code up the reports and
use select boxes for the options you want
-Original Message-
From: Don Cohen [mailto:don-mysq...@isis.cs3-inc.com]
Sent: Wednesday, June 16, 2010 2:48 PM
To: Daevid Vincent
Cc: mysql@lists.mysql.com
Subject: RE: opening a server to generalized queries but not too far
Daevid Vincent writes:
For the love of God
see ways in which it's better than all of them.
So far
manipulate. Why not provide daily SQL dumps of their normalized
data to your users and let them run their reports -- if they're
trying to run SQL queries themselves?
First, why do you assume these are daily reports
Can somebody help me with this?
Thanks!
On Thu, May 6, 2010 at 10:39 AM, Darvin Denmian
darvin.denm...@gmail.com wrote:
Hello,
I've activated the query cache in MySQL, with the variable
query_cache_limit set to 1 MB.
My question is:
How can I tell which queries weren't cached because
What queries, precisely, I can't tell you, but you can have a good idea
about how your cache performs using the stuff in show global variables;
and the online manuals about what it all means :)
Look at 'show global variables like %qcache%', for a start.
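One detail worth noting: the `Qcache%` counters are status variables rather than settings, so the useful numbers come from SHOW GLOBAL STATUS:

```sql
-- Cache effectiveness counters:
SHOW GLOBAL STATUS LIKE 'Qcache%';
-- Qcache_not_cached counts statements that bypassed the cache
-- (result bigger than query_cache_limit, non-deterministic
-- functions such as NOW(), uncacheable statement types, etc.)

-- The settings themselves:
SHOW GLOBAL VARIABLES LIKE 'query_cache%';
```

Comparing `Qcache_hits` against `Com_select` (also in GLOBAL STATUS) gives a rough hit rate; it won't name the individual queries, but it shows how much is slipping past the cache.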
On Fri, May 7, 2010 at 2:22 PM, Darvin
Can't get slow queries to log. Does this not work with MyISAM?
*snip*
[mysqld]
log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time = 1
*snip*
restarted mysqld - no log.
Created the file in /var/log/mysql/
*snip*
-rwxr--r-- 1 mysql mysql 0 May 7 10:33 mysql-slow.log
*snip*
still
At 12:04 PM 5/7/2010, Stephen Sunderlin wrote:
Can't get slow queries to log. Does this not work with MyISAM?
Sure it does. Have you tried:
slow_query_time = 1
Mike
still not writing to the file
I've read
http://dev.mysql.com/doc/refman/5.0/en/slow-query
Hello Stephen,
Did you try this?
mysql> show global variables like '%log_output%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_output    | FILE  |
+---------------+-------+
If log_output is FILE, then the slow queries will get logged in the
log
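The whole chain of settings can be checked (and, on 5.1+, changed without a restart) from the client; a sketch:

```sql
-- Is the log on, and where does it go?
SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';  -- on/off and file path (5.1+)
SHOW GLOBAL VARIABLES LIKE 'long_query_time';  -- threshold in seconds
SHOW GLOBAL VARIABLES LIKE 'log_output';       -- FILE, TABLE, or NONE

-- On 5.1 and later the log can be toggled at runtime:
SET GLOBAL slow_query_log = 'ON';
```

On 5.0 and earlier only the `log-slow-queries` option in my.cnf applies, and a restart is needed; the file must also be writable by the mysql user, which matches the permissions problem being debugged in this thread.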
Hi All
How can I get MySQL to only 'log-slow-queries' on specific databases instead
of globally?
--
Ramesh
Hi Ramesh,
As far as I know, we can only enable the slow query log globally
Regards,
Aravinth
to
any groups then it doesn't matter what groups the User is in. Currently
I use two queries to implement these rules. If the Count on the first
query is 0, then access is granted; if not, I execute the second query,
and if the count on it is greater than 0, access is granted.
SELECT COUNT
Hello,
Every time I run a mysqldump (mysql-server-5.0.77) all the other
legitimate queries that are occurring at that time pretty much sleep
and build up in the processlist until I either stop the dump or wait
for it to finish. The moment I do either one I can have about 8-15
queries waiting they all
--single-transaction since it avoids read locks (according to the man
pages).
That's great, however... this type of result was not being exhibited
some months ago. I know the database has grown. It has also happened
that some big queries done against it also cause the same issue. I
think
Dear MySQL forum.
I have performance problems when using left join x combined with
where x.y is null, particularly when combining three tables this
way.
Please contact me by e-mail if you are familiar with these issues and
know how to eliminate slow queries.
I would really appreciate your
With a left join, particularly when you're using *is (not) null*, you can't
use index
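The pattern in question is the anti-join, and it has an equivalent NOT EXISTS form that the optimizer sometimes handles differently (a sketch on hypothetical tables `a` and `b`):

```sql
-- Rows of a with no matching row in b, via LEFT JOIN ... IS NULL:
SELECT a.*
FROM a
LEFT JOIN b ON b.a_id = a.id
WHERE b.a_id IS NULL;

-- The same result via NOT EXISTS:
SELECT a.*
FROM a
WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.a_id = a.id);
```

Either way, an index on `b(a_id)` is what makes the probe into `b` cheap; comparing EXPLAIN output for both forms on the real three-table query is the quickest way to see which one the server executes better.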
From: machi...@rdc.co.za
To: mysql@lists.mysql.com
Subject: slow queries not being logged
Date: Tue, 23 Feb 2010 09:59:13 +0200
Good day all
I hope you can assist me with this one...
We have a client where the slow query log was disabled
it is enabled.
I have fixed this now but need to wait for a gap to reboot
again to have it set properly. (have to live with the filename 1 for the
time being.)
I did however find something interesting though, while
looking at the queries being logged
The slow query log will also have SQL statements which are not using
indexes (doing a full table scan).
Maybe those queries with a ZERO SECOND run time are on small tables
without using indexes.
regards
anandkl
On Tue, Feb 23, 2010 at 2:02 PM, Machiel Richards machi...@rdc.co.za wrote:
Hi All
I found
million
(from 160 million queries).
We wanted to look at these queries to see if it can be
optimised to reduce the amount and went through the whole database restart
routine to enable the slow query log again (they are running version 5.0 so
had to restart
Andy,
On Tue, Feb 9, 2010 at 10:27 AM, andy knasinski a...@nrgsoft.com wrote:
I've used the general and slow query log in the past, but I am trying to
track down some queries from a compiled app that never seem to be hitting
the DB server.
My guess is that the SQL syntax is bad and never get
I've used the general and slow query log in the past, but I am trying
to track down some queries from a compiled app that never seem to be
hitting the DB server.
My guess is that the SQL syntax is bad and never get executed, but I
don't see any related queries in the general query log
Unfortunately, I'm using a commercial application and trying to debug
as to why some data does and does not get updated properly.
On Feb 9, 2010, at 2:57 PM, mos wrote:
I do something like that in my compiled application. All SQL queries
are sent to a single procedures and executed
I'm not positive if the general log captures all invalid queries but
it does capture at least some.
I was asked the same question a few months back and checking to make
sure that manually issued invalid queries are logged (IIRC).
Could it be that the queries are never even making
On 09.02.2010 16:27, andy knasinski wrote:
MySQL University: Optimizing Queries with EXPLAIN
http://forge.mysql.com/wiki/Optimizing_Queries_with_Explain
This Thursday (February 4th, 14:00 UTC), Morgan Tocker will talk about
Optimizing Queries with Explain. Morgan was a technical instructor at
MySQL and works for Percona today.
For MySQL
the previous
record's principle. Someone makes a payment on a loan, which needs to
be entered along with the declining balance, but that depends on the
balance of the previous record.
Quite often, I see this pattern in time series data. Data is logged
and time-stamped, and many queries depend on the difference in
time-stamps between two consecutive records. For example, milk production
records: with milk goats, if milking is early or late, the amount of
milk is lower or higher. I need to do an analysis
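One classic way to pair each record with its predecessor is a correlated self-join (a sketch on a hypothetical `log` table with columns `id`, `ts`, `amount`):

```sql
-- For each row, find the latest earlier row and compute the gap.
SELECT cur.id,
       cur.ts,
       TIMESTAMPDIFF(MINUTE, prev.ts, cur.ts) AS gap_minutes
FROM log AS cur
JOIN log AS prev
  ON prev.ts = (SELECT MAX(p.ts) FROM log AS p WHERE p.ts < cur.ts);
```

An index on `ts` keeps the `MAX(...)` subquery cheap; user-variable tricks (scanning in `ts` order while carrying the previous value) are the usual faster alternative on large tables.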
On 17 Nov 09, at 10:41, Peter Brawley wrote:
I often need a pattern where one record refers to the one before
it, based on the order of some field.
Some ideas under Sequences at http://www.artfulsoftware.com/infotree/queries.php
.
Thanks, Peter! What a marvellous resource!
You
relevant in the manual.
Strange(?)
Syd
++
Sorry can't remember what version you said you were using; if you have a
version prior to 5.1.29 to log all queries enter the following in the
[mysqld] section of your my.cnf
log = /path/to/logfile/filename.log
Remembering that the path you specify must be writeable by the server.
OK thanks to some help from this list I now have a blank my.cnf file in /etc
And I want to set up logging of all sql queries.
So I have tried:
SET GLOBAL general_log = 'ON';
and/or putting (only) /var/log/mysql/mysql.log
in my.cnf and doing a restart via /etc/init.d
(have a pid file now -Ta
to compute the sum and create a
data view in the report builder I can put the total for each firm on the
report.
I have 2 separate queries that will compute the total renewal fees for
branches and the total renewal fees for agents, but I can't figure out how to
add these 2 numbers together
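The two existing totals can be added in one statement by turning each into a scalar subquery (a sketch; the table, column, and firm-id values are hypothetical stand-ins for the two queries already written):

```sql
-- IFNULL guards against a firm with no rows on one side,
-- since SUM over zero rows yields NULL and NULL + x = NULL.
SELECT
  IFNULL((SELECT SUM(renewal_fee) FROM branches WHERE firm_id = 42), 0)
+ IFNULL((SELECT SUM(renewal_fee) FROM agents   WHERE firm_id = 42), 0)
  AS total_fees;
```

The single column this returns can then be used as the data view for the report builder.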
1 - 100 of 1256 matches