Hi everybody,
would anyone know how to recover from a problem like this:
InnoDB: ok header, but checksum field contains 792537472,
should be 1776874443
2016-04-04 12:41:15 140333716928640 [ERROR] InnoDB: Redo log
crypto: failed to decrypt log block. Reason could be that
requested key version
Thank you for the answer. The problem is, as I wrote in my previous message,
that there is no SQL backup, just the files for a binary backup. The hardware
we are using is a simple laptop with Windows 7 that runs a 5.1 server in
case the originally installed files are in use. It runs a 5.5 server in
parallel as well, with folders containing all the .frm files, ib_logfile0,
ib_logfile1 and ibdata1. Trying to start the MySQL service, the log says the
following:
150805 16:58:28 [Note] Plugin 'FEDERATED' is disabled.
150805 16:58:28 InnoDB: Initializing buffer pool, size = 47.0M
150805 16:58:28 InnoDB: Completed initialization
On 05.08.2015 at 17:06, Csepregi Árpád wrote:
150805 17:02:31 InnoDB: Page dump in ascii and hex (16384 bytes):
hex...
150805 17:02:31 InnoDB: Page checksum 1094951825, prior-to-4.0.14-form
checksum 1449969277
InnoDB: stored checksum 1467223489, prior-to-4.0.14-form stored checksum
87759728
Got a very strange situation, where I receive two similar DELETE
statements at the same binary log position, due to which the replication
slave is stopped with the following error:
Could not execute DELETE rows event on table db1.xyz; Can't find record in
'xyz', error code: 1032.
Following entry
Hi all.
I am a newbie to MySQL, and have been going through several online
resources.
I usually come across the terms - flushing and syncing the log-buffer.
In particular, these two terms hold great significance while selecting the
value of
innodb_flush_log_at_trx_commit http://dev.mysql.com/doc
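For readers following the thread: "flushing" writes the log buffer out to the log file (to the OS), while "syncing" (fsync) forces the OS to push it to durable storage. A my.cnf sketch of the three values of this variable, with my own summary comments:

```ini
[mysqld]
# 1 (default): write the log buffer AND fsync to disk at every commit.
#   Fully ACID, slowest option.
innodb_flush_log_at_trx_commit = 1
# 2: write to the log file at every commit, fsync about once per second.
#   Up to ~1s of commits lost on an OS crash or power loss, none on a
#   mysqld-only crash.
#innodb_flush_log_at_trx_commit = 2
# 0: write and fsync about once per second.
#   Up to ~1s of commits lost even on a mysqld crash.
#innodb_flush_log_at_trx_commit = 0
```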
On 17.04.2014 10:37, Ajay Garg wrote:
I am a newbie to MySQL, and have been going through several online
resources.
I usually come across the terms - flushing and syncing the log-buffer.
In particular, these two terms hold great significance while selecting the
value
2014-04-17 11:11 GMT+02:00 Ajay Garg ajaygargn...@gmail.com:
On Thu, Apr 17, 2014 at 2:28 PM, Reindl Harald h.rei...@thelounge.net
wrote:
Am 17.04.2014 10:55, schrieb Ajay Garg:
I do understand the meaning of the Unix sync() function.
So, you mean to say that flushing and syncing are
- setting that was changed is : log_bin = new directory
- old binary logs were moved to the new directory after shutting
down the database
- database started up and continued as normal, however stopped at
the last binary log when it filled up and complained about a corrupted
-- I don't think anything relevant has changed during 4.0 thru 5.6.
-Original Message-
From: Machiel Richards - Gmail [mailto:machiel.richa...@gmail.com]
Sent: Wednesday, July 03, 2013 3:20 AM
To: mysql list
Subject: Master not creating new binary log.
Hi all
I hope all
binary log when it filled up and complained about a corrupted binary
log.
- a FLUSH LOGS and RESET MASTER was done and a new binary log,
mysql-bin.1, was created
- however the same thing happens here: the binlog file fills up to 100Mb
as configured, then stops without creating a new binary log
However, the moment the file reaches the file size of 100Mb, it
does not go on to create a new binlog file called mysql-bin.2, and
replication fails stating that it is unable to read the binary log file.
Thus far we have done a FLUSH LOGS and RESET MASTER, but the
same
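For anyone hitting the same rotation failure, the standard checks are (statements below are stock MySQL; the binlog basename is whatever your log_bin setting points at):

```sql
-- where does binary logging write, and what is the rotation size?
SHOW VARIABLES LIKE 'log_bin%';
SHOW VARIABLES LIKE 'max_binlog_size';
-- force a rotation by hand; SHOW BINARY LOGS should then list a new file
FLUSH LOGS;
SHOW BINARY LOGS;
```

If FLUSH LOGS also fails to create the next file, file-system permissions on the new log_bin directory are the usual suspect.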
On 17.06.2013 13:11, Mihamina Rakotomandimby wrote:
Say the binary log file (on the master) has reached its maximum
size, so that it has to switch to a +1 binary log file: does it
inform the SLAVE of that switch so that the SLAVE updates its
information about the MASTER status?
The master
Hi all,
Given a MASTER and a SLAVE.
When launching the SLAVE, it knows about the binary log file used by the
MASTER and the position in that log file.
Say the binary log file (on the master) has reached its maximum size, so
that it has to switch to a +1 binary log file: does it inform
On 2013-06-17 14:43, Denis Jedig wrote:
Say the binary log file (on the master) has reached its maximum
size, so that it has to switch to a +1 binary log file: does it
inform the SLAVE of that switch so that the SLAVE updates its
information about the MASTER status?
The master does not inform
a
brief delay.
-Original Message-
From: Mihamina Rakotomandimby [mailto:miham...@rktmb.org]
Sent: Monday, June 17, 2013 5:35 AM
To: mysql@lists.mysql.com
Subject: Re: SLAVE aware of binary log file switch?
2013/04/05 11:16 +0200, Johan De Meersman
Half and half - rename the file, then issue flush logs in mysql to close and
reopen the logs, which will cause a new log with the configured name to be
created.
That being said, I'm not much aware of Windows' idiosyncrasies - I hope the
damn thing
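The half-and-half approach Johan describes, as a Unix shell sketch (paths are examples; on Windows the same FLUSH LOGS applies after the rename):

```shell
# rename the current error log, then ask mysqld to close and reopen its
# logs; a fresh file with the configured name is created on the next write
mv /var/log/mysql/error.log /var/log/mysql/error.log.old
mysql -e "FLUSH LOGS"
```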
- Original Message -
From: h...@tbbs.net
Subject: Re: error-log aging
man logrotate
Not Unix!
So get unix :-)
In any case, I take this to mean that this is not done within MySQL,
right?
Half and half - rename the file, then issue flush logs in mysql to close and
reopen
On 04.04.2013 23:08, h...@tbbs.net wrote:
Is there somewhere within MySQL a means of aging the error log, so that it does
not grow indefinitely big, or is that done through the OS and filesystem on which
mysqld runs?
man logrotate
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql
2013/2/3 Larry Martell larry.mart...@gmail.com
We also ended up dropping the database and restoring from dumps.
However all recent dumps ended up having a similar corruption and we
were still getting the same errors. We had to go back to an October
dump before it would come up cleanly. And
, February 04, 2013 4:35 AM
To: Larry Martell
Cc: wha...@bfs.de; mysql
Subject: Re: log sequence number InnoDB: is in the future!?
On 02.02.2013 01:34, Larry Martell wrote:
On Mon, Jan 28, 2013 at 5:01 AM, walter harms wha...@bfs.de wrote:
hi list,
i am using mysql 5.1.53.
after a crash i have the following error in my log:
130128 10:45:25 InnoDB: Error: page 61 log sequence number 0 2871649158
InnoDB
On Mon, Jan 28, 2013 at 5:01 AM, walter harms wha...@bfs.de wrote:
hi list,
i am using mysql 5.1.53.
after a crash i have the following error in my log:
130128 10:45:25 InnoDB: Error: page 61 log sequence number 0 2871649158
InnoDB: is in the future! Current system log sequence number 0
or is there more ?
re,
wh
On Mon, Jan 28, 2013 at 2:21 PM, walter harms wha...@bfs.de wrote:
Am 28.01.2013 15:01, schrieb Manuel Arostegui:
2013/1/28 walter harms wha...@bfs.de
hi list,
i am using mysql 5.1.53.
after a crash i have the follwing error in my log:
130128 10:45:25 InnoDB
Dump and reload or use some scripting to create and drop some fake data to
increase the lsn towards the 'future' value.
http://dba.stackexchange.com/questions/8011/any-better-way-out-of-mysql-innodb-log-in-the-future
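The fake-data idea from the linked answer, sketched in SQL (the table name and sizes are made up; take a file-level backup before trying anything like this):

```sql
-- churn a throwaway table to generate redo activity, advancing the system
-- LSN until it passes the "future" value reported for the corrupt page
CREATE TABLE lsn_pump (id INT AUTO_INCREMENT PRIMARY KEY, pad CHAR(255))
  ENGINE=InnoDB;
INSERT INTO lsn_pump (pad) VALUES (REPEAT('x', 255));
-- doubles the row count on each run; repeat until SHOW ENGINE INNODB STATUS
-- reports a log sequence number beyond the one in the error message
INSERT INTO lsn_pump (pad) SELECT pad FROM lsn_pump;
DROP TABLE lsn_pump;
```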
Am 28.01.2013 14:40, schrieb Andrew Moore:
Dump and reload or use some scripting to create and drop some fake data to
increase the lsn towards the 'future' value.
http://dba.stackexchange.com/questions/8011/any-better-way-out-of-mysql-innodb-log-in-the-future
For now i tend to solution 3, rsync.
do you know if it is possible for only certain files?
no way
innodb has a global tablespace even with file_per_table
Hi,
There was sort of a bug which was fixed in MySQL 5.5 with replication
heartbeat. Before the replication heartbeat, a new relay log file would be
created after every slave_net_timeout. It doesn't have any negative impact
though.
Hope that helps.
Hi,
Please re-phrase your question. The relay logs are created as and when
required by the slave SQL thread. Once all the events in a relay log have
been executed, the relay log is purged by the slave SQL thread.
By setting relay_log_purge=0 you are disabling this automatic purge option.
So
Also, you may want to check whether a new file really is created exactly
every hour - e.g. whether any cron'd script runs FLUSH LOGS on the
slave server. That will also rotate the relay log.
Cheers
On Wed, Jan 9, 2013 at 1:35 AM, Akshay Suryavanshi
akshay.suryavansh...@gmail.com wrote:
Hi
- Original Message -
From: Rick James
Sent: 10/17/12 04:50 PM
To: Kent Ho, mysql@lists.mysql.com, replicat...@lists.mysql.com
Subject: RE: Unexpected gradual replication log size increase.
Check that server_id is different between Master and Slave(s). Check other
settings relating
that the IO writes to disk are increasing, creeping up slowly.
Next try find out why? I'm not a mysql guru. I've found a mysql bin log
analyser here:-
http://scale-out-blog.blogspot.co.uk/2010/01/whats-in-your-binlog.html
run it against the logs.
We noticed Max. Event Bytes
: Unexpected gradual replication log size increase.
Hi,
I have a MySQL replication setup running for a while, over 6 months, and
recently we had an outage. We fixed it, brought the server back up and we
spotted something peculiar and worrying. The replication logs are
growing in size, all of a sudden
VARIABLES LIKE '%log%';
Next. When you physically look in the slow query log, how long does it
say that it took this command to execute?
And last, before you can ask MySQL to fix a bug, you must first ensure
it's a MySQL bug. Please try to reproduce your results using official
binaries, not those
Will do.
mysql> SHOW GLOBAL VARIABLES LIKE '%log%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| back_log
your now() statement is getting executed for every row of the select. try
putting the phrase up front
as in:
set @ut = unix_timestamp(now())
and then use that in your statement.
2012/10/16 12:57 -0400, Michael Dykman
your now() statement is getting executed for every row of the select. try
putting the phrase up front
as in:
set @ut= unix_timestamp(now())
and then use that in your statement.
Quote:
Functions that return the current date or time each are evaluated only
That's exactly what I thought when reading Michael's email, but tried
anyways, thanks for clarification :)
And if both indexes are created I no longer have this query in the
slow log.
Of course, if I disable log_queries_not_using_indexes I get none of the
queries.
So is it a bug inside Percona's implementation
-Original Message-
From: spameden [mailto:spame...@gmail.com]
Sent: Monday, October 15, 2012 1:42 PM
To: mysql@lists.mysql.com
Subject: mysql logs query with indexes used to the slow-log and not
logging if there is index in reverse order
Hi, list.
Sorry for the long subject, but I'm really interested in solving this
and need some help:
I've got
Sorry, forgot to say:
mysql> show variables like 'long_query_time%';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| long_query_time | 10.00 |
+-----------------+-------+
1 row in set (0.00 sec)
It's getting into the log only due to:
mysql
Sorry, my previous e-mail was a test on MySQL-5.5.28 on an empty table.
Here is the MySQL-5.1 Percona testing table:
mysql> select count(*) from send_sms_test
> My initial question was why MySQL logs it in the slow log if the query uses
an INDEX?
That _may_ be worth a bug report.
A _possible_ answer... EXPLAIN presents what the optimizer is in the mood for
at that moment. It does not necessarily reflect what it was in the mood for
when it ran
Thanks a lot for all your comments!
I did disable Query cache before testing with
set query_cache_type=OFF
for the current session.
I will report this to the MySQL bugs site later.
2012/10/16 Rick James rja...@yahoo-inc.com
> My initial question was why MySQL logs it in the slow log
impossible, though, as there are a few easy workarounds.
1) Force all logins to use the PAM or AD authentication plugin -- if the
authentication is a success then log it in AD or PAM
2) use init_connect to log logins, but that doesn't work for users with
SUPER privileges as Keith mentioned below (thanks Keith for actually trying
to help!)
3) Write your own plugin using the MySQL Plugin APIs
4) use the McAfee Audit Plugin for MySQL (Free:
http
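Option 2 usually looks something like this (the audit schema and table are hypothetical; remember that init_connect is not executed for users with SUPER, which is exactly the caveat above):

```sql
-- hypothetical table to collect login events
CREATE TABLE audit.logins (
  usr       VARCHAR(96),
  client    VARCHAR(64),
  logged_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- executed for every new (non-SUPER) connection
SET GLOBAL init_connect =
  "INSERT INTO audit.logins (usr, client)
   VALUES (CURRENT_USER(), SUBSTRING_INDEX(USER(), '@', -1))";
```

Note that the connecting accounts need INSERT privilege on the audit table; if init_connect fails, the connection is refused.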
- Original Message -
From: Singer Wang w...@singerwang.com
2) use a init-connect to log logins but that doesn't work for users
with super privileges as Keith mentioned below (thanks Keith for actually
trying to help!)
That is indeed quite the nifty trick. Thanks, Keith :-)
3
Hello,
I want to find the last time the given list of users logged in.
Is there any mysql table from where I can retrieve the data, or any
specific SQL?
Aastha Gupta
There is no such thing. Your application has to deal with such info.
LS
On 04.10.2012 17:28, Aastha wrote:
I want to find the last time the given list of users logged in.
Is there any mysql table from where i can retrieve the data or any
specific sql
no - because this would mean a WRITE QUERY in the mysql-database
for every connection - having a
It is possible in MySQL 5.6
S
On Thu, Oct 4, 2012 at 11:30 AM, List Man list@bluejeantime.com wrote:
--
Linux Bier Wanderung 2012, now also available in Belgium!
August, 12 to 19, Diksmuide, Belgium - http://lbw2012.tuxera.be
I notice no specification of what kind of users, so I'm assuming DB users.
There *is* such a thing: you can find it in the general query log. Turning
that on is a considerable performance overhead, though, and so is firmly
discouraged on production systems.
it does not matter what kind of users
usually each application has its own database and its
own user; the application makes the connection and
can at this point log whatever you want
using the general query log can only be a bad joke
you will log EVERY query and not only logins
again
using the general query log can only be a bad joke
you will log EVERY query and not only log-ins
Yes, which is why I specified explicitly that it is very much discouraged for
production use.
However, it can be useful at times. I recently turned it on to investigate
IT IS IMPOSSIBLE
MYSQL CAN NOT DO WHAT THE OP WANT
Regardless of having any background knowledge on the circumstance of the
question, even.
mysql can not and will not log user-logins
You truly are a gifted individual.
your opinion, but the answer to the question of the OP
is simply NO you can't
[mailto:claudio.na...@gmail.com]
Sent: Thursday, October 04, 2012 3:51 PM
To: Reindl Harald
Cc: mysql@lists.mysql.com
Subject: Re: user last activity and log in
let's say 100 databases and 100 domains with 500 prefork
processes, because this would mean in the worst case 5
connections
* enabling the query log on machines with some hundred queries
per second would be a self-DOS and fill your disks
On 05.10.2012 01:26, Rick James wrote:
In looking
My friend Dave Holoboff wrote this up some time ago:
http://mysqlhints.blogspot.com/2011/01/how-to-log-user-connections-in-mysql.html
You know, you people sound like children.
Really unprofessional.
Go ahead --- call me names. I left middle school almost 30 years ago. It
won't bother me.
Can
One small correction. Init-connect doesn't require a restart of MySQL. I
was thinking of init-file. So that's even better.
On Thursday, October 4, 2012, Keith Murphy wrote:
My friend Dave Holoboff wrote this up some time ago:
http://mysqlhints.blogspot.com/2011/01/how-to-log-user
2012/9/5 Adarsh Sharma eddy.ada...@gmail.com
Actually that query is not my concern :
i have a query that is taking so much time :
Slow Log Output :
# Overall: 195 total, 16 unique, 0.00 QPS, 0.31x concurrency _
# Time range: 2012-09-01 14:30:01 to 2012-09-04 14:13:46
true Michael, pasting the output :
CREATE TABLE `WF_1` (
`id` varchar(255) NOT NULL,
`app_name` varchar(255) DEFAULT NULL,
`app_path` varchar(255) DEFAULT NULL,
`conf` text,
`group_name` varchar(255) DEFAULT NULL,
`parent_id` varchar(255) DEFAULT NULL,
`run` int(11) DEFAULT NULL,
...@gmail.com]
Sent: Wednesday, September 05, 2012 11:27 AM
To: Michael Dykman
Cc: mysql@lists.mysql.com
Subject: Re: Understanding Slow Query Log
Ok, this raises a question for me - what's a better way to do pagination?
On 9/5/12 2:02 PM, Rick James wrote:
* LIMIT 0, 50 -- are you doing pagination via OFFSET? Bad idea.
--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
Sometimes it pays to stay in bed
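The common answer is keyset ("seek") pagination: instead of LIMIT offset, count, remember the last key of the page you just served and filter past it, so the server never has to scan and discard the skipped rows (table and column names are illustrative):

```sql
-- first page
SELECT id, title FROM articles ORDER BY id LIMIT 50;
-- next page: seek past the last id of the previous page (say 1050)
-- instead of LIMIT 50, 50
SELECT id, title FROM articles WHERE id > 1050 ORDER BY id LIMIT 50;
```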
100 is tantamount to turning off the log. I prefer 2.
select count(ENTITY_NAME)
from ALERT_EVENTS
where EVENT_TIME > date_sub(now(), INTERVAL 60 MINUTE)
and status=upper('failed')
and ENTITY_NAME='FETL-ImpressionRC-conversion';
begs for the _compound_ index
INDEX
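Spelled out, the compound index Rick is hinting at puts the equality columns first and the range column last (names taken from the query above; a sketch, not a tested recommendation):

```sql
ALTER TABLE ALERT_EVENTS
  -- equality predicates (status, ENTITY_NAME) lead; the EVENT_TIME range
  -- goes last so the optimizer can use all three columns
  ADD INDEX ix_status_entity_time (status, ENTITY_NAME, EVENT_TIME);
```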
Hi all,
I am using Mysql Ver 14.14 Distrib 5.1.58, in which I enabled the slow query
log by setting the below parameters in my.cnf:
log-slow-queries=/usr/local/mysql/slow-query.log
long_query_time=100
log-queries-not-using-indexes
I am assuming from the info on the internet that long_query_time
Hi
Because of that, those queries don't use an index.
log-queries-not-using-indexes works even if the query time is less than
long_query_time.
http://dev.mysql.com/doc/refman/5.5/en/server-options.html#option_mysqld_log-queries-not-using-indexes
regards,
yoku
2012/9/1 Adarsh Sharma eddy.ada
Disable log-queries-not-using-indexes to log only queries > 100 sec.
Just do > /var/lib/mysql/slow-queries.log and it will clear the log.
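For reference, long_query_time is in seconds, so 100 logs only statements slower than 100 seconds; a more typical 5.1-era configuration might be (values are examples):

```ini
[mysqld]
log-slow-queries = /usr/local/mysql/slow-query.log
long_query_time  = 2            # seconds; 100 effectively disables slow logging
#log-queries-not-using-indexes  # off, so only genuinely slow statements are logged
```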
: using the bin-log approach on the master side, how can I
accomplish my replication objectives
Hi Charles,
I believe you would already have bin-log configured, is that right? If not, you
need to.
Secondly, If you think the bin-log generated for the entire stack of
databases/schemas is too big, you
Hi Gurus,
I would like to set BIN-LOG maintenance procedure for my master. The master is
on a windows platform. I’m all for make it simple and clean therefore I’ve been
leaning toward the automatic BIN-LOG removal “expire-logs-days=7”. The
problem is for this option to work, it should
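The automatic removal being discussed is just configuration; a sketch (values are examples; the purge is triggered by a rotation: startup, FLUSH LOGS, or a binlog reaching max_binlog_size):

```ini
[mysqld]
log-bin          = mysql-bin
expire-logs-days = 7      # purge binlogs older than 7 days at the next rotation
max_binlog_size  = 100M   # smaller files rotate more often, so purging kicks in sooner
```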
!
Regards,
From: Brown, Charles cbr...@bmi.com
To: Nitin Mehta ntn...@yahoo.com
Cc: mysql@lists.mysql.com mysql@lists.mysql.com
Sent: Thursday, May 3, 2012 4:24 PM
Subject: master BIN-LOG maintenance
Hi Gurus,
I would like to set BIN-LOG maintenance procedure
, 2012 3:17 PM
Subject: RE: using the bin-log approach on the master side, how can I
accomplish my replication objectives
Hello Nitin,
Please give Nitin a prize. What a quiet genius she is. Now, I get it. Now, I
can see clearly.
I’ve tried it and it worked.
Thanks so much.
From: Nitin Mehta
Subject: Re: master BIN-LOG maintenance
Hi Charles,
I guess your application doesn't generate too many binary logs. The
parameter expire-logs-days kicks in at a flush but does not
necessarily require a manual FLUSH LOGS command. You can reduce the
value of max_binlog_size to make sure
-Original Message-
From: Nitin Mehta [mailto:ntn...@yahoo.com]
Sent: Wednesday, May 02, 2012 9:25 PM
To: Brown, Charles
Cc: mysql@lists.mysql.com
Subject: Re: using the bin-log approach on the master side, how can I
accomplish my replication objectives
Hi Charles,
I believe you
Tables: db2tb1, db2tb2, db2tb3
Database: db3
Tables: db3tb1, db3tb2, db3tb3
Now, I would like to replicate only these tables that belong to respective
databases:
db1tb1, db2tb2, and db3tb3
My question is: using the bin-log approach on the master side, how can I
accomplish my replication
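With the binlog approach, the usual answer is to filter on the slave rather than in the master's binlog; a my.cnf sketch on the slave, using the table names from the question:

```ini
[mysqld]
# slave-side filters: apply only these tables from the master's binlog
replicate-do-table = db1.db1tb1
replicate-do-table = db2.db2tb2
replicate-do-table = db3.db3tb3
```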
-rules.html
Hope this helps!
From: Brown, Charles cbr...@bmi.com
To: Rick James rja...@yahoo-inc.com; a.sm...@ukgrid.net
a.sm...@ukgrid.net; mysql@lists.mysql.com mysql@lists.mysql.com
Sent: Thursday, May 3, 2012 8:51 AM
Subject: using the bin-log approach
Hi Charles,
I believe you would already have bin-log configured, is that right? If not, you
need to.
Secondly, If you think the bin-log generated for the entire stack of
databases/schemas is too big, you may want to restrict it using binlog-do-db
BUT that may create problems if you have any
files in the directory where the log files are?
This seems to be the reason. MySQL is run under the mysql user and the log
file is located under /var/log in Fedora, so the daemon doesn't have
enough privileges. It's clear now; we'd need to un-comment the line in
such a configuration.
Thanks
Hi all,
I'm thinking of the logrotate script that is shipped in the mysql tarball
(e.g. mysql-5.5.20/support-files/mysql-log-rotate.sh). There is a
commented line # create 600 mysql mysql, that should originally ensure
logrotate utility creates a new log file after rotating. Is there any
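Un-commented, that create line tells logrotate to recreate the file owned by mysql so the daemon can keep writing; a sketch in the spirit of the shipped mysql-log-rotate.sh (paths and credentials handling are examples):

```conf
/var/log/mysqld.log {
    create 600 mysql mysql
    daily
    rotate 3
    missingok
    notifempty
    postrotate
        # make mysqld reopen the renamed log; mysqladmin needs credentials
        /usr/bin/mysqladmin flush-logs
    endscript
}
```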