Marian Hettwer wrote:
As soon as replication starts, mysql gets very unresponsive:
-bash-3.1$ time mysqladmin -uroot -p proc stat
Enter password:
+----+-------------+-----------+----+---------+------+-----------------------------------------------------------------------+------------------+
| Id | User        | Host      | db | Command | Time | State                                                                 | Info             |
+----+-------------+-----------+----+---------+------+-----------------------------------------------------------------------+------------------+
| 4  | system user |           |    | Connect | 204  | Waiting for master to send event                                      |                  |
| 5  | system user |           |    | Connect | 8661 | Has read all relay log; waiting for the slave I/O thread to update it |                  |
| 7  | root        | localhost |    | Query   | 0    |                                                                       | show processlist |
+----+-------------+-----------+----+---------+------+-----------------------------------------------------------------------+------------------+
Uptime: 308  Threads: 1  Questions: 6328  Slow queries: 0  Opens: 0  Flush tables: 1  Open tables: 24  Queries per second avg: 20.545

real    0m15.463s
user    0m0.010s
sys     0m0.020s

15 bloody seconds to return mysqladmin proc stat ?
That ain't good.

Wasn't this just your slave catching up to the master and replicating all the tables your master has?

You don't provide the mysql.err log, etc., so we don't know whether it actually replicated your tables or not. From what you show here, I'd guess it did.

Let's see: the server had only been up for 308 seconds and averaged about 20.5 queries per second over that time, which accounts for your 6328 queries, all of which had finished according to this processlist. It also looks like it had finished mirroring and was just waiting for the master to send more events.

I don't know, but with what we have here, this is how I see it. Maybe I'm wrong, but let's see.

I have no clue how big your database might be, nor how many tables it has, etc.

The only thing I know is that you installed from packages. Great. Then you started the master/slave setup and it looks like it worked.

Then you tried to query the server, I guess for data, but it had only been up for five minutes and, judging by the queries listed, it was busy mirroring the data from the master to the new slave. Isn't that exactly what it was doing here?

So, were you expecting to have all the data ready as soon as you started MySQL on the slave? It looks like you were expecting it to be ready right away. I don't know, but I do know this: to query data, it has to be there first.

To me it looks like the box had been up 308 seconds and had started to mirror the source, updating those databases/tables, maybe creating the indexes as it went (I don't know your data), and maybe some of your updates were asking for table locks instead of using something like INSERT DELAYED.
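For what it's worth, this is the kind of statement I mean; the table and column names are made up just to show the syntax, and DELAYED only works with some table types (MyISAM for instance):

mysql> INSERT DELAYED INTO some_log (msg) VALUES ('example');

The client gets control back right away and the row is queued by the server, instead of the client sitting there waiting for the table lock.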

But now that it's been up for 216000 seconds, how is it doing?

Is your data mirrored correctly or not? Any errors in the mysql.err file or not? Is it responsive to queries or not?
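A quick way to check (I'm assuming the default package paths here; your data directory and .err file name may well differ):

-bash-3.1$ mysql -uroot -p -e 'SHOW SLAVE STATUS\G' | egrep 'Running|Behind|Error'
-bash-3.1$ tail -50 /var/mysql/`hostname`.err

Slave_IO_Running and Slave_SQL_Running should both say Yes, and anything that went wrong shows up in Last_Error and in the .err file.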

I just wonder if you expected everything to be mirrored and ready as soon as you issued START SLAVE?

I do know MySQL quite well (MySQL 4.1 in fact) and for the OpenBSD
installation I followed the guidelines at www.openbsdsupport.org, which
was basically just increasing kern.maxfiles and changing /etc/login.conf

There was more to it than that, including making sure you start the daemon with that login class as well, etc.

http://crivens.terrorteam.de/~rabauke/OpenBSD/MySQL/my.cnf.txt
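And since you mention editing /etc/login.conf: a class stanza for the daemon usually looks something like this (the class name matches your _mysql user; the limits are only an illustration, take the values from that document):

_mysql:\
	:openfiles-cur=1024:\
	:openfiles-max=2048:\
	:tc=daemon:

If /etc/login.conf.db exists, run cap_mkdb /etc/login.conf afterwards so the change is actually picked up.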

Ah, I nearly forgot:
-bash-3.1$ sysctl kern.maxfiles
kern.maxfiles=8096

Did you notice that the suggested configuration has double the allowed files in the kernel compared to the my.cnf configuration?

There is a reason for that. See, the top of that document says that when you open about 29 tables you will get error 9, even though you see 64 in the default limits, right? Why do you think that is? The MySQL documentation explains that MySQL almost always opens at least two files per table, so maybe MySQL should rewrite the meaning of max_open_files in the configuration. But anyway, simple rule of thumb: put twice the file limit in sysctl as you put in my.cnf, not the same value as you have here.

Not that I think you hit that limit here, since you didn't say anything about error 9, but should you one day end up with a lot of tables and get close to this limit, you won't know why things break.
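To make that rule of thumb concrete with the number you posted (purely as an illustration, use values that fit your data): if the my.cnf side asks for 8096 open files, give the kernel roughly twice that instead of the same value:

-bash-3.1$ sudo sysctl kern.maxfiles=16192

and put kern.maxfiles=16192 in /etc/sysctl.conf so it survives a reboot.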

-bash-3.1$ sudo su -m _mysql -c "ulimit -a"
time(cpu-seconds)    unlimited
file(blocks)         unlimited
coredump(blocks)     unlimited
data(kbytes)         1048576
stack(kbytes)        8192
lockedmem(kbytes)    635692
memory(kbytes)       1905588
nofiles(descriptors) 128
processes            532

This shows you have a _mysql class defined; it doesn't show, however, that your daemon is actually running under it at this time.


Don't forget that the man page is clear on that subject: the class is applied at login, but here the daemon doesn't log in (see master.passwd). If you start it manually from your shell, it will use the default class, not the _mysql class, unless you expressly tell it to.

This is overlooked almost every time.

Many people think that by defining the class, the daemon will use it.

But when you start the daemon, are you that user? No, you are not, so it can't use that class; it will use the default class instead. Just a word of caution on this part as well, in case you didn't take care of it either.

Proof.

# su _mysql
This account is currently not available.

So it can't use the _mysql class when you start it, unless you specify it explicitly.

su -c _mysql root -c '/usr/local/bin/mysqld_safe &' > /dev/null &
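
If you want that to happen at boot, the usual place is /etc/rc.local; something along these lines (a sketch using the same command, with the class name assumed to be _mysql as above):

if [ -x /usr/local/bin/mysqld_safe ]; then
	echo -n ' mysqld'
	su -c _mysql root -c '/usr/local/bin/mysqld_safe &' > /dev/null 2>&1 &
fi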

Hope this helps you some.

Daniel
