Re: InnoDB, 1 file per table or 1 BIG table?

2006-10-09 Thread Bruce Dembecki

On Oct 9, 2006, at 7:15 AM, Ow Mun Heng wrote:


Hi All,

Just wanted to know if it would be faster/better to implement this
option into my.cnf

innodb_file_per_table = 1

which would essentially make each table a file on its own rather than
have it all in 1 file.
My belief is that it would be slightly more advantageous compared to 1
BIG file.

eg: 1 10GB file would perform poorer than 10 1GB files.

Is this statement true, and how far is it true?


There are some minor performance benefits here when run against
benchmarks... but tiny. It is generally true that for very small
tables it's a lot easier to find a 20-byte row in a 100K file than in
a 40GB file. Even so, InnoDB is pretty efficient and really knows how
to pull data out of the big table space well... most of the benchmark
gains we've seen and others have reported are in the sub-1% area...
of course every little bit helps.


There are some general management benefits for some people; others
may find the changes less helpful, depending on your circumstances.
For us, little things make a difference... for example, a corrupt
InnoDB table file under file_per_table means only one table is at
risk, as opposed to the entire database. Someone else mentioned
OPTIMIZE TABLE advantages and freeing up disk space.


One of the big things that really helps us is having files that are
appropriately sized for the data. Before, we had 40G of table space
and usually only 15G of data, so backups and file copies involved
copying all 40GB of the table space. Now the backup processes, and
anything else we do that involves moving data files around, only move
the actual amount of data we have, not all the empty table space left
for growth... a big performance gain when moving files around the
network.


Files can still be stored on different storage devices by making  
symlinks within the data directory for specific database directories  
or even specific table files.
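A minimal sketch of that symlink approach (the database name and directory locations are made up for illustration; in real life DATADIR would be MySQL's data directory and FASTDISK a directory on the other storage device):

```shell
# Demonstration in scratch directories: relocate one database's
# directory to another disk and symlink it back into the data
# directory. MySQL follows the symlink transparently.
DATADIR=$(mktemp -d)    # stand-in for e.g. /usr/local/mysql/data
FASTDISK=$(mktemp -d)   # stand-in for a mount on a faster device

mkdir -p "$DATADIR/busydb"                   # an existing database directory
mv "$DATADIR/busydb" "$FASTDISK/busydb"      # move it to the other device
ln -s "$FASTDISK/busydb" "$DATADIR/busydb"   # symlink it back in place
```

The same trick works one level down for individual table files.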


We really like innodb_file_per_table - but mostly because it makes
our lives easier in many ways, not so much for performance reasons.


Best Regards, Bruce.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: move to 64 bit

2006-01-11 Thread Bruce Dembecki
Hi! Essentially this is true. However, let me qualify that a little.
Clearly, to take advantage of 64 bit you'll want to change your
memory settings and allocate something over 2G in my.cnf. Also you'll
want to be using OS X 10.4.n; 10.3 and earlier don't really support
64 bit. Now that the true basics are done...
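For example, a my.cnf fragment along these lines (the sizes are illustrative, not recommendations) is only usable on a 64 bit build:

```ini
[mysqld]
# A buffer pool over 2G is the whole point of going 64 bit;
# a 32 bit mysqld cannot address this much memory.
innodb_buffer_pool_size = 4096M
innodb_additional_mem_pool_size = 32M
```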


Beware: we encountered a problem with OS X 10.4.n, any 64 bit MySQL,
and InnoDB. Under the right mix of conditions, when you allocate
InnoDB more than 2GB of memory it is possible at some point to hang
the machine, which will restart 5 minutes later when the system
watchdog processes cycle the power supply to fix itself. It turns out
it's an OS bug, and it's the sort of thing where you need the Sun,
Moon and 18 planets to align for it to happen, which made it a little
difficult to track down at Apple's end. The good news is I'm told
they found it and nuked it, and we will likely see the fix included
in OS X 10.4.5... just wanted to warn you in case you are using
InnoDB and you start to see the Sun line up with the Moon and...


Other than that, go for it, it all fits together beautifully,  
changing the binary is all you need to do.


Best Regards, Bruce

On Jan 11, 2006, at 7:19 AM, Roland Carlsson wrote:


Hi!

What must I do to move our mysql-database from 32-bit binaries to
64-bit (Mac OS X)? Is it as simple as just changing binaries, or
must I prepare the data-files somehow?


Thanks in advance
Roland Carlsson









Re: Background tasks performed by MySQL?

2005-11-15 Thread Bruce Dembecki
I would expect this will ultimately turn out to be something on the
client end rather than the server end... is there a search index that
gets rebuilt periodically? Maybe some reports that get generated
against the data? The last example that comes to mind: if you use a
client that caches data, does the cache get dumped or the client
restarted at some point?


These are the sorts of things that we found led to the type of
behavior you are talking about. Let me give clearer examples. Our
databases typically serve discussion boards... usually very large
scale discussion boards (think eBay or HBO scale). The discussion
board server (in this case the database client) keeps its own search
index, but needs to update it on a regular basis to keep it current.
If that update is too infrequent, or the queries poorly optimized,
they can generate a lot of load on the database, and you get the type
of results you are seeing. Or the discussion board may analyze the
stats for the last day (or week, or month, etc.) to provide
information for reports... in our case a million page views a day
means a million stats records a day, and any analysis can be quite
the load generator. Same thing with the cache on our discussion
board: if the board has been up for some time, it already has the
most frequently used messages in local cache and doesn't query the
database for each message, but an instance going live into production
with an empty cache can mean a huge database hit for a few minutes
while the cache gets populated.


These are just examples from our life, but I'm pretty sure when all
is said and done the cause will be some periodic process that your
client is generating, rather than the MySQL server running some sort
of background process, which we've never seen.


Take a look at the process list when it is in one of these cycles
(from the mysql command line client, type SHOW PROCESSLIST;). It
should give you a pretty good idea of what's doing what at the time,
and some idea of where to look.


Best Regards, Bruce

On Nov 15, 2005, at 2:20 AM, Viktor Fougstedt wrote:



Hi.

We have a MySQLd with both MyISAM and InnoDB tables that at uneven  
intervals stops responding correctly to connections.


At all times, about one connection per minute fails, regardless of
which database and/or user and/or remote host is connecting. The
same connection parameters (and same queries) work correctly 99.9%
of the time, and it is entirely random which connections time out
and when.


We can live with that problem, which does not seem to have any  
explanation.


But some times, MySQLd starts taking all the CPU it can get, and  
gets extremely sluggish for a few minutes. At these times, several  
connections every second are rejected because of timeouts. These  
rejections we can't live with.


To attempt solving the problem, I've started thinking that there  
might be some form of periodical cleanup that MySQLd or InnoDB  
performs automatically, and that we could force it to perform at  
night when the expected load is lower.


Is there any such background cleanup performed? It could be  
periodical, when a certain number of queries/updates/inserts have  
been run, or when some query cache or similar gets full?


If these problems or descriptions somehow ring a bell, I would  
welcome any insight I could get from the list.



Thanks in advance,
/Viktor...









Re: MyISAM to InnoDB

2005-09-28 Thread Bruce Dembecki

On Sep 28, 2005, at 5:21 PM, Devananda wrote:


Jeff wrote:
Lots of stuff goes in here...


So without going into specifics here... your sort_buffer and
read_buffer become pretty much unimportant if you move everything to
InnoDB, keeping in mind the earlier advice to leave the mysql
database intact in MyISAM. If you go that route you can set them to
something trivial like 32K, and even the 250x multiplier doesn't
hurt you much. Once you do that, of course, you should give InnoDB
most of the memory.


On the disk side, the one comment I didn't see is that if you already
have 2.2G of data, you need about 4.5G of InnoDB table space to hold
it. As a rough guideline, InnoDB needs twice as much table space as
there is data, for things like indexes, undo logs and so on... your
2 x 2G files aren't going to cut it as a place to put your 2.2G of
data. In your case I'd throw 3 x 2G files at it plus a fourth
auto-extending file - more if you move your other tables. Most of our
database instances had 20 x 2G files in their InnoDB table space
before we moved to innodb_file_per_table, where it stopped being an
issue.
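A shared-tablespace layout like the one suggested (three fixed 2G files plus an auto-extending fourth; file names are the conventional defaults) would look something like this in my.cnf:

```ini
[mysqld]
# 3 fixed 2G files, plus a fourth that starts at 2G and grows as needed
innodb_data_file_path = ibdata1:2000M;ibdata2:2000M;ibdata3:2000M;ibdata4:2000M:autoextend
```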


Best Regards, Bruce






Re: Problems setting up on Mac OS X

2005-09-28 Thread Bruce Dembecki
On OS X Server there is a copy of MySQL already installed, and just
installing the MySQL binary doesn't necessarily bypass it. Try
logging on as root with no password; you may still be running from
Apple's data directory rather than MySQL's. Starting mysqld using
/usr/local/mysql/bin/mysqld_safe will generally avoid that problem...
this is the copy MySQL installed.


You should also create an /etc/my.cnf file telling MySQL which data
directory you want, plus all your other specific settings.


It can be quite troublesome getting around Apple's install, as they
tend to install it back again with various system updates.


Best Regards, Bruce

On Sep 28, 2005, at 11:52 AM, Jim C. wrote:


I had the same problem with both of these:

mysql-standard-4.1.14-apple-darwin7.9.0-powerpc.dmg
and
mysql-standard-4.0.26-apple-darwin7.9.0-powerpc.dmg

They seem to install OK, but when I set the root password I can't
get in afterwards as root.

Clues?

Jim C.










Re: MyISAM to InnoDB

2005-09-22 Thread Bruce Dembecki
You will need to make sure you have InnoDB configured in the my.cnf
file and that you have enough space built for it in the shared table
space. InnoDB also needs its own memory pool, so make sure you give
it enough memory. For day to day use there is no problem doing
InnoDB/MyISAM replication, with a couple of small caveats: an ALTER
TABLE would replicate, and thus may change the table type from MyISAM
to InnoDB or vice versa, depending on which server the ALTER TABLE
came from. To go with that, the original conversion from MyISAM to
InnoDB would also need to be done in such a way as to not be
replicated.



Remember that an ALTER TABLE with an impact could be as simple as
adding or dropping an index... usually very simple ALTER TABLE
statements like that can be written without naming the table engine,
but some GUIs may insert it for you on even the simplest ALTER TABLE
commands.
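For illustration (the table and index names are made up), compare the two forms:

```sql
-- Safe to replicate between mixed-engine servers: no engine named,
-- so each server keeps whatever engine its copy already uses.
ALTER TABLE messages ADD INDEX idx_posted (posted_at);

-- What some GUIs emit: the same index change, but with the engine
-- spelled out. Replicated to a slave whose copy is InnoDB, this
-- silently converts that copy to MyISAM.
ALTER TABLE messages ADD INDEX idx_posted (posted_at), ENGINE=MyISAM;
```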


Best Regards, Bruce

On Sep 22, 2005, at 7:59 AM, Jeff wrote:


Hey all,

I've got a production database that made up of all MyISAM tables.  I'd
like to change some of the more heavily written to tables to InnoDB to
take advantage of the record level locking and thus improve write
performance of our applications.

I currently have a second db server that is replicating from the  
current
production system but not in production yet. I'd like to try to  
convert
it to InnoDB.  MySQL version is 4.0.16.  It it as symple as just  
issuing

the modify table query or are there problems I should be aware of when
doing this?

Also are there known problems replicating from A - B - A (circular
replication) when A had Table1= InnoDB and B has Table1=MyISAM?

Thanks,

Jeff












Re: Circular Replication

2005-09-22 Thread Bruce Dembecki

On Sep 21, 2005, at 5:23 AM, Jeff wrote:


I am interested in how you go about doing a delayed replication to
protect against operator error.  We've already fallen victim to that
situation here.



The long story short is we use the fact that MySQL can run the SQL
thread and the I/O thread of replication separately, and control them
individually. In practice we use cron and a whole bunch of scripts to
stop the I/O thread (the one reading from the master) most of the
time, and manage when the SQL thread replicates. E.g. at 4:00 cron
stops the SQL thread. At 4:01 we start the I/O thread (it can read a
lot of changes very quickly from the master, so it only needs a short
time to catch up with all the changes). At 4:05 we stop the I/O
thread. Then we wait a few minutes to give ourselves a buffer...
finally, at 4:15 we start the SQL thread, and we repeat the cycle
every two hours.


The upshot is that at the short end we are 10 minutes behind (the gap
between stopping I/O at 4:05 and starting SQL at 4:15), and at the
long end we are 2 hours behind (at 4:07, for example, the last query
the SQL thread could have executed came from the master at 2:05).
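The cycle above could be sketched as cron entries like these (a bare-bones sketch: the real scripts also handle monitoring, multiple instances and the kill-switch check, and the mysql invocation would need credentials):

```cron
# minute hour dom mon dow  command      (every even hour)
0  */2 * * *  mysql -e "STOP SLAVE SQL_THREAD"
1  */2 * * *  mysql -e "START SLAVE IO_THREAD"
5  */2 * * *  mysql -e "STOP SLAVE IO_THREAD"
15 */2 * * *  mysql -e "START SLAVE SQL_THREAD"
```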


Our scripts are a little more complicated, to marry into our
monitoring system without setting off alerts that replication has
stopped and so on (and of course the machine that runs this speaks to
many masters using many instances of MySQL, so we need to manage this
for every instance). We also allow an emergency stop by having the
scripts do an existence check on a specific file: if the file isn't
there, they don't start any replication processes. We then have a
stop script which tells the instances to stop whatever they are doing
and deletes the file. At that point replication can't resume until we
replace the file manually. We tie that emergency script to a TCP port
and hey presto... in the event of an emergency all someone needs to
do is hit the right TCP port on the server (telnet to it, hit it with
a browser, anything that will cause the port to see some activity)
and all the replication comes to a stop.
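The kill-switch side of that reduces to a file-existence check like this (the flag path and function name are made up for illustration):

```shell
# Replication start scripts run this check first; the emergency stop
# script simply deletes the flag file, so nothing restarts until a
# human puts the file back.
check_kill_switch() {
    # succeeds only while the flag file exists
    [ -f "${REPL_FLAG:-/var/run/mysql-replication.ok}" ]
}

if check_kill_switch; then
    echo "flag present: ok to start replication threads"
else
    echo "flag missing: replication stays stopped"
fi
```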


Also, as part of our 2-hourly cycle we do a lot of binary log
flushing on the slave and the masters, so if we ever need to roll
back we can roll back to a specific point in time and only have to
deal with fixing problems in the logs from that point onwards. If an
operator error gets by before we can stop it, we can go to
yesterday's backup and execute only those binary logs from before the
incident, then deal with the issue in question.


This process has reduced our downtime in the event of a total
database corruption from four hours (recover from yesterday's data
and lose everything since) to 30 minutes (lose only the data since
the last 2-hourly rollover). And it doesn't take long to dump the
last set of binary logs to a text file, find and fix/remove the
corrupting command, and apply the whole log to the database,
effectively giving us almost zero lost data and putting us back
online in no time (although when clients are screaming, even 30
minutes feels like an eternity). This is all, of course, so much
better than the four-hour downtime we had before this system.


And there are side benefits... for example, backups are easier to do
because the data isn't being changed except for a few minutes every
2 hours. Instead of coordinating timing scripts, locking tables and
doing dumps, we can do simple file system duplication of the data
directories.


Best Regards, Bruce




Re: MyISAM to InnoDB

2005-09-22 Thread Bruce Dembecki


On Sep 22, 2005, at 11:46 AM, Jeff wrote:


True. Is there a way to tell a slave not to replicate certain queries
like ALTER TABLE, or would I need to get creative and stop
replication and all writes to the main database, issue the ALTER
TABLE statement, then restart replication with a SET GLOBAL
sql_slave_skip_counter=1 so that it skips the ALTER statement?




On a single connection use:

SET SQL_LOG_BIN = 0

It's a connection variable; the default is 1, which means queries
altering the data get written to the binary log. Changing it to 0
means data-altering commands from this specific connection do not get
written to the binary log. It's best not to leave a connection lying
around with this setting, because it's the sort of thing you forget
about and later end up with data inconsistencies. However, short term
use - turning it off, doing your thing, and turning it on again -
usually works without trouble... eg:


SET SQL_LOG_BIN = 0;
ALTER TABLE some stuff here;
SET SQL_LOG_BIN = 1;

Not all users have permission to issue such a command.


If I understand what you're saying here, some MySQL front end GUI
software will automatically add, onto any ALTER TABLE statement you
submit, a clause specifying the type of table, like MyISAM. So if
you used that GUI and tried to issue an ALTER statement to, say, add
an index to an InnoDB table, it would add on a table type = MyISAM
and cause havoc?

Normally I don't rely on GUI tools to do my serious queries like
altering tables or adding indexes; I do them logging directly into
the MySQL server on the Linux box itself. In this case there
shouldn't be a problem, correct?


Some GUIs take simple steps and write them out into their full long
SQL form. Adding a table's engine or type to an ALTER TABLE is
optional in MySQL (officially it is supposed to be there), so some
GUIs put it there... typically, if you haven't told the GUI to change
the table type it will just use whatever type the table is now...
but the end result in the binary log will still go to the other
server and potentially change something there.


There shouldn't be a problem using the mysql command line client...
but I'm going to emphasize shouldn't here. When you have two
different table types on master and slave you need to be **really**
sure you don't mess that up.


Best Regards, Bruce




Re: Circular Replication

2005-09-20 Thread Bruce Dembecki

On Sep 16, 2005, at 11:07 AM, Jeff wrote:


There shouldn't be a problem if:

server A is ver 4.0.x
server B is ver 4.1.x

should there?



There will definitely be a problem here. The 4.1 server will take the
4.0 feed without issue. The 4.1 server, however, puts all sorts of
information into the binary log which isn't in the original query,
like what collation and character set to use and so on... 4.0 doesn't
understand such commands, and lots of things break in this situation.


As a side note, we deploy servers in pairs with circular replication.
We did three and four server circles, but it gets messy if
replication stops somewhere: the data becomes unpredictably
inconsistent (assuming all the servers in the circle are getting
production updates). Now we do simple two way replication between a
pair, and we hang a third server off the pair somewhere, just
reading. The third server we use for backups, data dumps, reports
and other non-production tasks. Essentially it is something like
A-B-C, where A and B have two way replication and C is used for
backups/reports etc... anything that changes the data happens on A
or B.


We do some other black magic to manage the replication on C so it's
perpetually behind the master servers by between 15 minutes and 2
hours... that way, if we have a stupid operator error or some other
data-corrupting event, we can stop replication on the backup server
before it executes the bad statement and recover from there, rather
than having to go back to yesterday's backup.


Best Regards, Bruce




Re: Myisam or innodb

2005-09-15 Thread Bruce Dembecki
For high volume discussion board type work InnoDB is faster. Our
slow query log dropped 66% just by switching engine types.


Earlier comments about InnoDB not supporting full text should
actually read "InnoDB doesn't support full text indexes"... this is
only important if you really need a full text index.


If you want transactions, ie a series of queries tied together and  
all executed in sequence or all rolled back if there's an issue, then  
you must choose InnoDB.


There are some areas, such as data warehousing, that benefit from
MyISAM, and some that benefit from InnoDB. For the majority, though,
there's no clear choice, and I understand that most people who have
tried direct comparisons end up choosing InnoDB based on performance.
If you don't have a clear reason for using one over the other,
compare both with appropriate memory settings and see how you go.


In a hosting environment you either need to use 4.1 or higher with
innodb_file_per_table to make sure each table gets its own file, or
you need to allocate a LOT of disk space to the shared InnoDB table
space to make sure you don't run out.


As to a setting to change the default... the book says:

If you omit the ENGINE or TYPE option, the default storage engine is  
used. Normally this is MyISAM, but you can change it by using the -- 
default-storage-engine or --default-table-type server startup option,  
or by setting the storage_engine or table_type system variable.


which in plain language means: add a line like this to your my.cnf file:

default-storage-engine=innodb (or myisam or whatever you want it to be)

Users could still create a table of a different type, but unless they
specify something it will be whatever you set as the default. Also be
aware that memory settings behave differently for different storage
engines: InnoDB needs its own memory, MyISAM doesn't use InnoDB's
memory, and so on. If you set your default to InnoDB but a user still
sets up something in MyISAM, you need to make sure there is some
memory for MyISAM.


Final note: MySQL needs the mysql database to exist, and the tables
it cares about must be MyISAM... these hold your user permissions and
so on. Don't try converting this database to InnoDB; that would be
bad.


Best Regards, Bruce

On Sep 15, 2005, at 2:31 AM, Scott Haneda wrote:

I have moved my data from MySQL 3 to 4, across various updates, over
the past few years. I use phpMyAdmin generally, and it defaults to
making tables MyISAM.

I can not seem to find a really clear answer as to why I want to use
one over the other. Generally, I use MyISAM and change it only when I
need to do a rollback or something like that.

Since I seem to be able to toggle from MyISAM to InnoDB without any
adverse effects, why would one choose one over the other?

Is there a way to prevent the use of MyISAM if it is old and slowly
being deprecated? I am a host and do not control the table structure
of clients' databases.
--
-
Scott Haneda    Tel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.












Re: Total newb at performance tuning mysql

2005-09-09 Thread Bruce Dembecki
One of our engineers first installed MySQL on one of our Sun boxes,
which was doing nothing more than MySQL... we simply put it on the
server and turned it on, and it behaved very badly. When we started
to investigate whether we could use MySQL, we discovered that our Sun
box with four processors and 4GB of RAM was running MySQL in 64M of
memory. It's an easy mistake to make, and the lesson here is that
out-of-the-box (as it were) MySQL settings are a little on the low
side for performance... but work well for a shared environment where
you may have a web server, mail server and more all running on the
same box. If you want MySQL to sing... you are going to have to do a
lot of tuning.


On the table_cache issue... we have about 40 tables per database, and
some of our servers have 30 databases. Our servers have as many as
500 connections... one server picked at random, with an uptime of 60
days, shows:


mysql1 (none): show status like 'open%_tables';
+---+---+
| Variable_name | Value |
+---+---+
| Open_tables   | 2748  |
| Opened_tables | 3288  |
+---+---+
2 rows in set (0.01 sec)

mysql1 (none): show variables like 'table_cache';
+---+---+
| Variable_name | Value |
+---+---+
| table_cache   | 4096  |
+---+---+
1 row in set (0.01 sec)

So we have a table cache value, but it's clearly on the high side and  
could be lower. Yours at 64 is on the low side.


While I won't ignore table cache as being important, there are many  
many performance tuning things that need to get done to have an  
impact on the server.


Firstly (knowing the type of things you are doing), InnoDB will
likely be a far better choice for most of your tables than the
default engine, MyISAM. You need to tune the machine to run InnoDB,
and then convert your data to InnoDB. If you are using MySQL 4.1
(which I can't remember) I would advise using innodb_file_per_table.
If you are looking at upgrading to 4.1, I'd do that first, before
switching to innodb_file_per_table... it's a little hard to claim
back the shared table space after the fact. If not 4.1, then go with
InnoDB and build a shared table space big enough to hold all your
data with room to spare. We typically build it with 20 2GB files...
for 40GB of InnoDB table space.


Decide how much memory you have to run MySQL... if the server does
only MySQL, this is easy; if it's also a web server running Apache
and so on, then you have to decide the mix. Assuming MySQL only, give
InnoDB 80% of the server's total memory, up to certain limits with
32 bit operating systems and the like. For OS X we found these are
pretty much the magic numbers for max values if you have more than
2GB of RAM but can't handle 64 bit:


innodb_buffer_pool_size=1850M
innodb_additional_mem_pool_size=256M
innodb_log_files_in_group=2
innodb_log_file_size=250M
innodb_log_buffer_size=20M
innodb_thread_concurrency=16
innodb_flush_log_at_trx_commit=0
innodb_lock_wait_timeout=30

Once you convert everything to InnoDB the regular MySQL buffers have  
less importance, but should still have some values.


InnoDB or not, the query cache is a good thing, but don't set it too
high. We are at 128MB and that's a little higher than we need... it
appears we could live in under 64MB, and our query cache handles
about 25% of our queries. Properly set, it's a good thing.


So... most likely switching to InnoDB will improve performance.
Managing all your memory settings and caches, so that the ones that
matter have helpful values, will make a great deal of difference...
of course, to do this you'll need to make a my.cnf file and install
it where MySQL will look for it.


Also important for tuning is watching the slow queries and finding
out if there are more things you can do with indexes, or other ways
to optimize the queries. Turn on the slow query log... leave it set
to the default 10 seconds... find out what queries are running longer
than 10 seconds and figure out how to optimize them, changing
indexes, changing the query, etc. Once you have worked that out and
your slow query log gets few hits, reduce it to 5 seconds and work
through those queries... keep reducing it as you work out the
performance issues and you'll find that you are streaming along.
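Enabling the slow query log in a 4.x-era my.cnf looks something like this (the log path is an example):

```ini
[mysqld]
log-slow-queries = /var/log/mysql/slow.log  # enable the slow query log
long_query_time  = 10                       # seconds; reduce as you clear the slow queries
```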


There's a lot more that can be done with specific memory settings and  
so on... but I think I've given you a handful of things to get  
started on, and you can come back for more when you have made some  
headway on this part.


Best Regards, Bruce

On Sep 7, 2005, at 4:28 PM, Scott Haneda wrote:


Unless I am totally off base here, the mysql docs tell very little in
regards to how to performance tune mysql.

So far I have found this article to help:
http://www.databasejournal.com/features/mysql/article.php/ 

Re: Recommendations on new hardware

2005-09-08 Thread Bruce Dembecki
Yes... OS X 10.4 with a 32 bit MySQL binary is stable. It is the
combination of a 64 bit OS (Tiger), the 64 bit MySQL binary, and
accessing more than 2GB of memory within the mysqld process that
blows up the machine. You can also run the 64 bit binary but keep the
memory allocation below 2GB (but that's pretty pointless).


64 bit is important if you have large data sets: the more you can
keep in memory instead of relying on disk access, the better things
run. If your regularly accessed data isn't so large, no issue
anyway... and if you don't have more than 2GB of memory to get at,
also no problem.


And of course, until it is worked out, stay within the limits of 32
bit and you don't have a problem :-)


Best Regards, Bruce

On Sep 7, 2005, at 8:51 PM, Scott Haneda wrote:


on 9/7/05 8:42 PM, Bruce Dembecki at [EMAIL PROTECTED] wrote:



Yeah, 64 bit isn't working... we can set the memory partition for
InnoDB to some big number, like say 10G or more (on the 16G Xserves),
and it will launch, so it has 64 bit OS and 64 bit MySQL Binaries...
We get past the first hurdle, but in practise it just doesn't run...
It works fine until InnoDB gets to more than 2GB of memory it's
actually trying to use (as opposed to reserving when it launches),
and then the server locks up (OS level, not a mysqld crash). After 5
minutes the watchdog timer kicks in and reboots the machine.

So I don't know if it's an Apple issue or a MySQL issue... the 64 bit
MySQL binary does it, the 64 bit binary I made with mySQL source does
it, 10.4.0 does it, 10.4.1 does it, 10.4.2 does it... etc...

I'm sure it's something we will get resolved, just for now we can't
run 64 bit. So we'll keep working with both our Apple and MySQL
contacts to get each of them trying to find the problem... but (being
the open minded people we are) we'll also take a look at Yellow Dog
and see what's involved in getting Linux up and making a 64 bit MySQL
Binary to run under Yellow Dog.



Sorry if this is boneheaded here, but are there non 64 bit variants
you can run on a G5 if you just want the stability until this is
worked out?
--
-
Scott Haneda    Tel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.












Re: Recommendations on new hardware

2005-09-07 Thread Bruce Dembecki

On Sep 6, 2005, at 11:09 PM, Scott Haneda wrote:


After reading this:
http://www.anandtech.com/mac/showdoc.aspx?i=2436 I suspect OS X  
is just

not going to cut it.



So while I think it is beneficial to be open to new things at all  
times, there are, as always, two sides to any story. The Anandtech  
articles stem back to a test they developed which essentially  
measures how long it takes to open and close a connection to MySQL...  
and then relates that, and pretty much that alone, to scalability.


Speaking as someone who knows about scale and deals in 100M+  
dynamically created page views month after month, let me say that  
opening and closing a connection to MySQL is NOT a good measurement  
of this. That said, we use MySQL on our Mac servers to serve each and  
every one of those dynamically created pages.


Anandtech says over several trials they have tried various MySQL  
config settings to fix or improve things, and of course they list the  
settings they have tried. Guys... you are taking issue with the time  
to establish a thread... try the thread cache (duh)!!


More importantly... If you are really focussed on scale you need to  
look at many parts of your technology, not just your database  
servers. One of the issues demonstrated by this review is that PHP,  
which opens a connection each time you access the database, may not be  
the best way to go if you are a high volume site; consider instead  
something that utilizes a connection pool, such as JDBC (which of  
course would require you use Java for your application, rather than  
PHP). I'm not suggesting that there is something wrong with PHP or  
that everyone should code in Java... I'm just saying there are many  
technology choices that go into determining what will work and how  
well it will scale, and this Anandtech article focuses on one that  
doesn't apply to many many people.


Now IF you are relying on PHP and IF you are receiving high volume  
and IF your MySQL server isn't performing well enough, then you may  
be in a position where the Anandtech article applies. One solution  
(the one they are seemingly presenting) may be to change the database  
server's hardware platform. But it's not the only solution, and you  
should look beyond this one issue to make sure you are choosing  
something appropriate to your actual needs.


We're happy with our Mac based MySQL servers in many respects. We've  
got some 64 bit issues that are causing a little grief, so we're  
looking at our options... Obviously working with Apple and MySQL to  
determine the real reason for the 64 bit failures will be high on the  
list. At the same time we're going to take a look at Yellow Dog's  
Linux for Power PC and then we can do a direct comparison and see  
what differences a change in Operating System makes to us in real  
life, not just what effect it has on Anandtech's benchmarks, which  
don't represent how we work at all. Once we've looked at Yellow Dog  
we'll be happy to talk about our experiences there too.


In general though, we will say that the XServe G5/OS X combo works  
better for us in most everything than our Sun servers running  
Solaris: our XServes handle twice the load of Sun V240s on our  
Java based web applications, for half the price. But we're also  
willing to look at a Linux variant to see if it helps us in database  
land when teamed with 64 bit... I have a couple of 16Gbyte RAM  
XServes I want to see how hard I can push in database land. In  
database land our G5 Xserves with 8Gbytes so far outperformed a Sun  
V440 with 4 processors and 16Gbytes of RAM by so much it wasn't  
funny... so OS X on Mac boxes isn't such a terrible thing.


Give some thought to Yellow Dog... if you're going Linux anyway why  
not work with the hardware you have, $89 for the OS with installation  
support seems less dramatic than switching your whole hardware  
platform... but also give some thought to how you are using MySQL and  
where the performance issues really are, not just the single issue  
that one set of tests keeps focussing on (this is the third in a  
series of articles from Anandtech focussed on opening connections to  
a MySQL server on OS X - there is so much more to performance than this  
one piece of the puzzle, and there are plenty of solutions which  
don't mean throwing out your hardware).


Best Regards, Bruce




Re: Recommendations on new hardware

2005-09-07 Thread Bruce Dembecki
Yeah, 64 bit isn't working... we can set the memory partition for  
InnoDB to some big number, like say 10G or more (on the 16G Xserves),  
and it will launch, so it has 64 bit OS and 64 bit MySQL binaries...  
We get past the first hurdle, but in practice it just doesn't run...  
It works fine until InnoDB gets to more than 2Gbytes of memory it's  
actually trying to use (as opposed to reserving when it launches),  
and then the server locks up (OS level... not a mysqld crash). After 5  
minutes the watchdog timer kicks in and reboots the machine.


So I don't know if it's an Apple issue or a MySQL issue... the 64 bit  
MySQL binary does it, the 64 bit binary I made with MySQL source does  
it, 10.4.0 does it, 10.4.1 does it, 10.4.2 does it... etc...


I'm sure it's something we will get resolved, just for now we can't  
run 64 bit. So we'll keep working with both our Apple and MySQL  
contacts to get each of them trying to find the problem... but (being  
the open minded people we are) we'll also take a look at Yellow Dog  
and see what's involved in getting Linux up and making a 64 bit MySQL  
Binary to run under Yellow Dog.


Best Regards, Bruce

On Sep 7, 2005, at 4:23 PM, Scott Haneda wrote:


on 9/7/05 2:40 PM, Bruce Dembecki at [EMAIL PROTECTED] wrote:



We're happy with our Mac based MySQL servers in many respects. We've
got some 64 bit issues that are causing a little grief, so we're
looking at our options... Obviously working with Apple and MySQL to
determine the real reason for the 64 bit failures will be high on the
list. At the same time we're going to take a look at Yellow Dog's
Linux for Power PC and then we can do a direct comparison and see
what differences a change in Operating System makes to us in real
life, not just what effect it has on Anandtech's benchmarks, which
don't represent how we work at all. Once we've looked at Yellow Dog
we'll be happy to talk about our experiences there too.



Ok, you are almost selling me on getting an Xserve, can you tell me  
a bit about the 64 bit issues and how they affect me?  I have someone  
who may just donate me an Xserve, one of the older ones, but still,  
not a bad piece of hardware at all.
--
-
Scott Haneda    Tel: 415.898.2602
http://www.newgeo.com    Novato, CA U.S.A.












Re: Recommendations on new hardware

2005-09-07 Thread Bruce Dembecki
And one other thing... our smallest load database server was a pair  
of G4 XServes, running about 300 queries per second, taking 5% CPU on  
the top display (which on OS X is 5% of one CPU). We ran the same  
load on a pair of Sun V440 quad processor machines with 16Gbytes of  
memory and it used 30 - 50% CPU (which on Solaris is the percentage  
of all the CPUs combined). Even switching back to a 32 bit Solaris  
binary we had major performance loss compared to the Macs... the bus  
speed is an issue on the Sun boxes... fast processors don't mean  
anything when they are tied to a sub 200MHz system bus.


Ignoring the time to connect issue in the Anandtech articles because  
it simply doesn't affect us, the Mac boxes really perform very very  
well for us (which is the only benchmark that means anything to us).


On the 64 bit side if you aren't planning on giving MySQL more than  
2GBytes of memory, it's not an issue to start with.


Best Regards, Bruce


On Sep 7, 2005, at 4:23 PM, Scott Haneda wrote:



on 9/7/05 2:40 PM, Bruce Dembecki at [EMAIL PROTECTED] wrote:




We're happy with our Mac based MySQL servers in many respects. We've
got some 64 bit issues that are causing a little grief, so we're
looking at our options... Obviously working with Apple and MySQL to
determine the real reason for the 64 bit failures will be high on  
the

list. At the same time we're going to take a look at Yellow Dog's
Linux for Power PC and then we can do a direct comparison and see
what differences a change in Operating System makes to us in real
life, not just what effect it has on Anandtech's benchmarks, which
don't represent how we work at all. Once we've looked at Yellow Dog
we'll be happy to talk about our experiences there too.




Ok, you are almost selling me on getting an Xserve, can you tell me  
a bit about the 64 bit issues and how they affect me?  I have someone  
who may just donate me an Xserve, one of the older ones, but still,  
not a bad piece of hardware at all.
--
-
Scott Haneda    Tel: 415.898.2602
http://www.newgeo.com    Novato, CA U.S.A.



















Re: INNODB memory allocation error during startup

2005-09-06 Thread Bruce Dembecki
We see that error when we go above the memory limits of a 32 bit  
operating system. When we run a 64 bit binary on our 64 bit hardware  
we can allocate more memory and it fires up fine.


I'm a Mac guy, so your string of letters and numbers describing your  
hardware and MySQL version don't mean much to me... but assuming your  
6Gbyte machine has 64 bit support (otherwise why have a 6Gbyte  
machine), and assuming you have a 64 bit operating system  
installed... are you using a 64 bit MySQL binary? I've got zero  
understanding of the different codes and letters and numbers for the  
Linux products or the Intel products etc... but I would expect you  
need an ia64 version from MySQL, maybe this one -  
http://dev.mysql.com/get/Downloads/MySQL-4.0/mysql-standard-4.0.25-unknown-linux-gnu-ia64-icc-glibc23.tar.gz/from/pick  
- which seems to be the 64 bit version of the one you are trying to use.
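As a quick sanity check before hunting for the right binary, the kernel architecture can be read with `uname` (a sketch; the exact strings vary by platform):

```shell
arch=$(uname -m)
echo "kernel architecture: $arch"
# x86_64 or ia64 here means a 64 bit kernel; i386/i686 means 32 bit,
# and a 64 bit MySQL binary will not run on it.
```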


Best Regards, Bruce

On Sep 2, 2005, at 7:17 AM, eddie wrote:


Hello kind sirs,


I have a DL380 G4 box, Dual Xeon 3.0GHz with 6GB of DDR2 RAM.
I'm running Linux CentOS 4 with a 2.6.12.5 vanilla kernel, and a remote  
storage which I access via iSCSI (linux-iscsi-4.0.2 compiled as a  
module) connected with two Gigabit interfaces. The 'balancing'  
between the two links is maintained by multipathd (multipath-tools  
0.4.4).


The situation is like this:

I had a DL380 G4 box with 4GB of DDR2 RAM that was running RedHat  
ES3-Update4 with the stock RedHat kernel and MySQL 4.0.22 Intel-ICC  
precompiled binaries (downloaded from mysql.com - mysql- 
standard-4.0.22-pc-linux-gnu-i686-icc-glibc23.tar.gz); the storage  
was a local RAID5 of 4 10K RPM 72GB SCSI disks.


On the old box, everything worked fine, other than IO problems to  
the local raid, so I've moved it to the new box:


I've moved the data 'as-is' to the new iSCSI box, upgraded the  
MySQL binary to 4.0.25 (still precompiled Intel-ICC binaries), and  
kept the my.cnf as it was on the previous box:


-- my.cnf --

innodb_additional_mem_pool_size = 200M
innodb_buffer_pool_size = 2G
innodb_data_file_path = ibdata1:10G;ibdata2:10G;ibdata3:10G;ibdata4:10G;ibdata5:10G;ibdata6:10G

innodb_file_io_threads = 4
innodb_thread_concurrency = 16
innodb_log_buffer_size = 8M
innodb_log_file_size = 500M
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 90

-- my.cnf --

The problem is that when I start MySQL I get the following error:

-- error log --

050902 16:06:12  mysqld started
InnoDB: Fatal error: cannot allocate 2097168384 bytes of
InnoDB: memory with malloc! Total allocated memory
InnoDB: by InnoDB 214666988 bytes. Operating system errno: 12
InnoDB: Cannot continue operation!
InnoDB: Check if you should increase the swap file or
InnoDB: ulimits of your operating system.
InnoDB: On FreeBSD check you have compiled the OS with
InnoDB: a big enough maximum process size.
InnoDB: We now intentionally generate a seg fault so that
InnoDB: on Linux we get a stack trace.
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this  
binary
or one of the libraries it was linked against is corrupt,  
improperly built,
or misconfigured. This error can also be caused by malfunctioning  
hardware.
We will try our best to scrape up some info that will hopefully  
help diagnose
the problem, but since we have already crashed, something is  
definitely wrong

and this may fail.

key_buffer_size=16777216
read_buffer_size=1044480
max_used_connections=0
max_connections=200
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 1858782 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd=0x83a3008
Attempting backtrace. You can use the following information to find  
out

where mysqld died. If you see no messages after this, something went
terribly wrong...
Stack range sanity check OK, backtrace follows:
Stack trace seems successful - bottom reached
Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html  
and follow instructions on how to resolve the stack trace. Resolved
stack trace is much more helpful in diagnosing the problem, so please do
resolve it
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0xa2a17e7d is invalid pointer
thd->thread_id=1701522733
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.
050902 16:06:12  mysqld ended

-- error log --

I've tried playing with 'innodb_additional_mem_pool_size' and  
'innodb_log_file_size' and the only thing that works is when I  
lower 'innodb_additional_mem_pool_size' to 1500M, anything higher  
than that doesn't work.


I've checked ulimits on the machine and it looks fine:

-- ulimit -a --
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
file size   (blocks, -f) unlimited

Re: Need explanation on string data size and storage requirement.

2005-08-23 Thread Bruce Dembecki


On Aug 23, 2005, at 10:20 AM, Sunil Vishwas wrote:


I was looking into the storage requirement for the various data types
and
got confused by following comment:

'For the CHAR, VARCHAR, and TEXT types, L and M in the preceding table
should be interpreted as number of bytes before MySQL 4.1 and as  
number

of
characters thereafter.'

Then I looked into other pages like
http://dev.mysql.com/doc/mysql/en/string-type-overview.html and got
more confused.

How would it be possible to use the same storage size to store an  
equal number of characters and bytes, if a character is not just one  
byte long?

For example a UTF-8 based Unicode character may take 1 to 4 bytes.

There is the issue... before 4.1 the limit was a size, from 4.1  
onwards the limit is the number of characters... So clearly from 4.1  
onwards your table sizes may vary depending on your character set  
choices.


Look at it like this... If you have a limit of 8 bytes on a password  
field, and you convert your database to UTF-8 in 4.1, your password  
could be much larger than 8 bytes, and it would be truncated if the  
limit were size based; instead it is character based, allowing a full  
conversion of data from 4.0 to 4.1 if managed correctly.
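The character-versus-byte distinction is easy to demonstrate outside MySQL; a rough shell sketch (the sample strings are just illustrations):

```shell
ascii="abc"        # 3 characters, 3 bytes
accented="éàü"     # 3 characters, but 6 bytes when encoded as UTF-8
ascii_bytes=$(printf '%s' "$ascii" | wc -c | tr -d ' ')
utf8_bytes=$(printf '%s' "$accented" | wc -c | tr -d ' ')
echo "ascii: $ascii_bytes bytes, accented: $utf8_bytes bytes"
```

A byte-limited column of width 3 could not hold the accented string's UTF-8 encoding; a character-limited one can, at the cost of more storage.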


Best Regards, Bruce




Re: Migration from MySQL 4.0 to 4.1

2005-08-23 Thread Bruce Dembecki
Once you decide to use mysqldump, be aware that the quickest way to  
export/import large files is to use the --tab feature on export and  
mysqlimport to load the data...


Essentially:

On the old (4.0) server:

mysqldump --tab=/var/tmp/directory mydatabase

On the new (4.1) server (assuming you have a new empty mysql data  
directory with just your MyISAM based mysql database to ensure your  
permissions files are there):


mysql -e "create database mydatabase;"
cat /var/tmp/directory/*.sql | mysql mydatabase
mysqlimport mydatabase /var/tmp/directory/*.txt

Essentially you are creating a text .sql file for each table with the  
create table command, and a .txt file with the raw data in tab  
delimited format... mysqlimport imports the whole data file as one  
SQL command, whereas with traditional mysqldump output you get a  
unique SQL insert command for each line of data... doing it once  
means only writing the indexes etc. once, among other time saving  
advantages... it's far quicker to insert many rows of data as a  
single INSERT command than it is to do it row by row. So if you have  
a large data set and you are doing the export/import thing, that is  
the way to go...
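As a sketch of the --tab layout, assuming a hypothetical `users` table: `mysqldump --tab` writes a schema file and a data file per table, and `mysqlimport` derives the target table from each data file's basename. The server-side commands are shown commented out since they need a live server:

```shell
dumpdir=$(mktemp -d)   # stand-in for /var/tmp/directory
# mysqldump --tab="$dumpdir" mydatabase would write, per table:
printf 'CREATE TABLE users (id INT, name VARCHAR(20));\n' > "$dumpdir/users.sql"
printf '1\talice\n2\tbob\n' > "$dumpdir/users.txt"   # tab-delimited rows
# Import side (needs a running server):
#   mysql -e "create database mydatabase;"
#   cat "$dumpdir"/*.sql | mysql mydatabase
#   mysqlimport mydatabase "$dumpdir"/*.txt
# mysqlimport matches each .txt file to a table by filename:
for f in "$dumpdir"/*.txt; do basename "$f" .txt; done
```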


That said there is another option... in theory you can upgrade to 4.1  
keeping your shared table files, then tell each table to ALTER TABLE  
engine=innodb; this will force it to rewrite the table from scratch,  
and if you have innodb_file_per_table set, it will be created  
accordingly... The benefit here is that your downtime is minimal, but  
the problem is that at the end of the day you are still left with  
your shared InnoDB table space, and even though it may be mostly  
empty, you can't clean it up and make it smaller.
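The in-place route can be scripted; a sketch with hypothetical database and table names (in practice the table list would come from `SHOW TABLES`, and the generated statements would be piped to `mysql`):

```shell
db="mydatabase"
tables="orders customers"   # in practice: mysql -N -e "SHOW TABLES" "$db"
stmts=""
for t in $tables; do
  # Rebuilding as InnoDB (even if already InnoDB) rewrites the table; with
  # innodb_file_per_table on, the rewrite lands in its own .ibd file.
  stmts="$stmts
ALTER TABLE \`$db\`.\`$t\` ENGINE=InnoDB;"
done
echo "$stmts"
# run them with:  echo "$stmts" | mysql
```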


Best Regards, Bruce

On Aug 23, 2005, at 6:19 AM, Rafal Kedziorski wrote:


Hi,

we have a J2EE application which is using MySQL 4.0. There is a  
bug which was fixed in MySQL 4.1: we are using transactions and  
InnoDB, and the query cache isn't used. Now we have to migrate our  
DB to MySQL 4.1 to use this feature. In our actual installation we  
store our data in one InnoDB file. After migration we want to use  
file per table.


What is the best and fastest way to make migration?


Best Regards,
Rafal











Re: cannot drop database

2005-08-16 Thread Bruce Dembecki

On Aug 15, 2005, at 5:07 AM, Logan, David (SST - Adelaide) wrote:



Hi Gary,

If you are running unix (or variants thereof), you can go to the  
data directory and remove it at the operating system level if the  
mysql client can't do it. Not sure about windows though but I would  
think the same thing would apply.


If you do that and then do a show databases, it should be gone.




David is right in that once you do this it won't show in the show  
databases list any more, but if you use a storage engine that keeps  
data in a shared table space (such as InnoDB, to name one), doing  
this deletes the database as far as MySQL is concerned, while the  
data is still sitting in the shared table space with no practical way  
of getting it out and freeing your space.


The correct way is to make sure you enclose the database name in  
backticks, as previously mentioned in this thread.
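A sketch of the quoting, with a hypothetical database name; backticks make the server treat the string as an identifier rather than try to parse it as syntax:

```shell
db='some-odd name'                 # hypothetical problem name
stmt="DROP DATABASE \`$db\`;"
echo "$stmt"
# run against a live server with:  echo "$stmt" | mysql
```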


Best Regards, Bruce



Regards


David Logan
Database Administrator
HP Managed Services
148 Frome Street,
Adelaide 5000
Australia

+61 8 8408 4273 - Work
+61 417 268 665 - Mobile
+61 8 8408 4259 - Fax


-Original Message-
From: Gary Huntress [mailto:[EMAIL PROTECTED]
Sent: Monday, 15 August 2005 9:09 AM
To: mysql@lists.mysql.com
Subject: cannot drop database

I need to drop a database named ÃáãÃáà using the mysql client. I'm
getting "you have an error in your SQL syntax" for the command

DROP database ÃáãÃáÃ;

I'm sure this is a character set issue.  How can I drop this database?


Regards,

Gary H.


















Re: French Characters, Still no answer

2005-08-15 Thread Bruce Dembecki

Still no answer, perhaps, but there's still no question.

Per my earlier response... What version of MySQL is the old version  
you refer to, and what version is the new version? With that  
information someone here is more likely to be able to tell you  
something useful... without it you're not likely to get much of a  
response.


Best Regards, Bruce

On Aug 15, 2005, at 7:59 AM, James Sherwood wrote:

I am still having trouble with french characters if anyone has ANY  
ideas, please help.


We have installed the newest version of MySQL and cannot get it to  
play nice with French characters.  Our older version worked fine.  
The problem may (or may not) be that when we put the dump into the  
new database (yes, its default charset is UTF-8) the default  
character set for the table is UTF-8 but some fields are like this:

'Story' longtext character set latin1 NOT NULL

We tried linking our tomcat to the old database on the other server  
through this tomcat and everything works fine, but when we link the  
tomcat back to the new database, it will not play nice with French  
characters (they come out as outlined squares etc).

Any ideas would be greatly appreciated
James







Re: Character Set Question

2005-08-13 Thread Bruce Dembecki
Need more information... what exactly is "our older version", and  
what exactly is "the newest version of MySQL"? Without this it's hard  
to know what issues you may be facing... I imagine you are on 4.0.n  
for the old and 4.1.n for the new... but we can't really tell from  
the information you gave us.


When you run mysqldump you get an output file with everything in  
it... I suggest running mysqldump --tab=/var/tmp/somedirectory, which  
will create a series of files in the location you specify, with  
a .sql file for each table containing the create table command, and  
a .txt file for each table with the data in tab delimited format.  
This gives you an easy way to edit the create table statements to  
make sure each table has the character set information you really  
want in it before you import the data. Then you can do the import  
using cat *.sql | mysql database to create the tables, and run  
mysqlimport against the .txt files to insert the data. Using this  
process you can more precisely manage your tables so they have the  
right character set for each column... you could do it by editing  
your regular mysqldump output file, but it's a big file and this way  
is just easier... it's also quicker to do the import this way.


Be sure to dump the old database using the old mysqldump; that way,  
if there was no character set information, it won't put something in  
there by mistake (the new mysqldump could insert something of its  
choosing if there is nothing defined)... be sure to use the new mysql  
client and mysqlimport to insert the data into the new version,  
making sure to use an appropriate --default-character-set setting  
each time you call it.


Best Regards, Bruce

On Aug 12, 2005, at 4:24 AM, James Sherwood wrote:





Hello,

We have installed the newest version of MySQL and cannot get it to  
play nice with French characters.  Our older version worked fine.  
The problem may (or may not) be that when we put the dump into the  
new database (yes, its default charset is UTF-8) the default  
character set for the table is UTF-8 but some fields are like this:

'Story' longtext character set latin1 NOT NULL

We tried linking our tomcat to the old database on the other server  
through this tomcat and everything works fine, but when we link the  
tomcat back to the new database, it will not play nice with French  
characters (they come out as outlined squares etc).

Any ideas would be greatly appreciated
James


















Re: mysql command line execution

2005-07-28 Thread Bruce Dembecki
You *COULD* include the information in the my.cnf file under the  
[client] area, something like this:


[client]
user=bruce
password=brucesPassword

That would tell the client to use those credentials unless something  
else is specified.


Of course that needs to be saved in plain text in a plain text file  
somewhere where people could get to it, so it may not be much of an  
improvement...


However there are several areas where my.cnf can be stored, so there  
may be some opportunities here... Given that one of the places a  
valid my.cnf file can exist is the user's home directory (where it  
would be called ~/.my.cnf and is somewhat harder to see because of  
the leading dot) you could set up a user specifically for handling  
such tasks in your server's account management system. Probably avoid  
making such a user on a network user management system such as LDAP  
or NIS, but you can build a local account for this user.  Assign this  
user a home directory, and set permissions restrictions on the home  
directory and the .my.cnf file so other users can't access it. Then  
you could su to this user and create a crontab to execute your  
scripts... because you will be this user, your mysql command line  
client would read your .my.cnf file and use that username and  
password unless told otherwise on the command line calling mysql.


That said, I stress again... it is still a plain text file and the  
password is saved in readable text... if you forget to set tight  
enough permissions to prevent other users from accessing the file you  
can run into trouble. I wouldn't consider it secure, but it's better  
than including the password in scripts all over the place. Your other  
users would need to get into this new phantom user's home directory,  
find the file and read it... because the file is called .my.cnf it  
won't show on ls unless someone does an ls -a, and then only if they  
have permissions to access that directory - given you will probably  
give the home directory in question drwx------ permissions, only  
someone logged in as that user (or root) should be able to access the  
directory and see what's in it, and the file will need to be readable  
by the user, so it needs at least read permission for the owner,  
probably not much more than that.
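A sketch of the file layout and permissions (a temp directory stands in for the batch user's home; the user name and password are placeholders):

```shell
home=$(mktemp -d)                  # stand-in for the batch user's home dir
cnf="$home/.my.cnf"
cat > "$cnf" <<'EOF'
[client]
user=batchuser
password=notARealPassword
EOF
chmod 700 "$home"                  # drwx------ : only the owner enters
chmod 600 "$cnf"                   # -rw------- : only the owner reads
ls -la "$home"
```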


Best Regards, Bruce

On Jul 28, 2005, at 7:09 PM, Jeff Richards wrote:


Hi,

Is there a secure way of running mysql commands against the db from  
the

command line, or in some kind of secure batch mode, without making the
password totally visible? We need to procedurize things like flush
tables with read lock, unlock tables etc. Is making the password
visible on the command line the only way?

Thanks,

Jeff

--
Jeff Richards
Consulting Architect
Openwave Systems Asia Pacific
+61 415 638757











Re: Correct way to use innodb_file_per_table?

2005-07-26 Thread Bruce Dembecki


On Jul 26, 2005, at 3:56 AM, Marvin Wright wrote:



Regarding the file size issue, we are on a 32-bit system running  
RedHat AS3; we already have .ibd files in excess of 21Gb. I'm not  
sure what the limit is though, if any?

No, typically a 32 bit file system would have limits like 2G or 4G...  
at 21G already I doubt you'll have a problem... That said I know  
little more about Linux than how to spell it, so I'm not the best  
person to give specific information on that.


Just one thought about the shared space: do you think it would be  
possible to back up all the current shared data files along with the  
iblog files, change the my.cnf file to use a single ibdata file of 2  
gig, then try to start it up?  If it did fail, how about reverting  
the my.cnf and restoring the original ibdata and iblog files; would  
it still work after restoring the original files?  With this I could  
test Heikki's idea without the possibility of losing data.

Yes, if you start MySQL without the files (simply moving them to a  
holding directory) and it doesn't come up or can't find the data  
after it boots, you can always put the files (and your original  
my.cnf) back and go on from there. At least that's how it's supposed  
to work :-)
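The set-aside-and-restore idea looks roughly like this (the server must be stopped first; a temp directory and empty files stand in for the real datadir and tablespace files):

```shell
datadir=$(mktemp -d)               # stand-in for the real MySQL datadir
hold=$(mktemp -d)                  # holding directory for the originals
touch "$datadir/ibdata1" "$datadir/ib_logfile0" "$datadir/ib_logfile1"
mv "$datadir"/ibdata* "$datadir"/ib_logfile* "$hold"/
ls "$hold"
# If the new, smaller tablespace fails: stop mysqld, restore the old
# my.cnf, and move everything back:  mv "$hold"/* "$datadir"/
```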


Best Regards, Bruce




Re: Correct way to use innodb_file_per_table?

2005-07-25 Thread Bruce Dembecki


On Jul 25, 2005, at 5:33 AM, Marvin Wright wrote:


Hi,

Thanks for your reply.

I've only just moved all tables to their own table space so that I  
can put certain databases on different disks.
Right now my shared tablespace does not hold any databases.
I'm aware that I still need the shared table space but I don't need  
200Gb now; I just want to decrease it down to 10Gb.

It seems a bit daft that I still have to dump all tables even when  
they are
in their own tablespace.  I guess this is because the table  
definitions are

still stored in the shared space.

Marvin.

Hi! These are good questions... Heikki once told me that if there is  
no activity going on AND the innodb status page shows nothing being  
processed AND everything is up to date according to the innodb status  
page, you could (in theory) shutdown mysql and bring it back with a  
new shared table space under these circumstances... That is going to  
require that every connection to the database server be idle, or  
better still shut off... Depending on how your machines access your  
database server that may be easy or hard to do...


We had some character set issues to work on and were (are - it's an  
ongoing project) needing to do a dump and an import to do the move  
from 4.0 to 4.1 at the same time... So we didn't actually try and  
bounce a server into a smaller shared table space live... I have  
total control over my client connections to the database server and  
can easily prevent them from connecting with a hardware load  
balancer, and I'm still not sure I would want to try that though.


Hint if you are going the dump and import route... The fastest way to  
dump, and for sure the fastest way to import, is to use mysqldump  
--tab=/var/tmp/somewhere and then mysqlimport to load the tab  
delimited data... Using --tab on the dump creates two files for each  
table: an .sql file with the CREATE TABLE statement, and a .txt file  
with the tab delimited data... We create our databases using cat /var/ 
tmp/somewhere/*.sql | mysql ourDatabase, and then use mysqlimport  
ourDatabase /var/tmp/somewhere/*.txt - mysqlimport is smart enough  
to insert data into tables matching the filename, and it's the fastest  
way to do the whole dump and import thing by a lot.
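The filename-to-table mapping mysqlimport relies on can be sketched
like this (a hypothetical Python re-implementation for illustration
only; mysqlimport does this internally, stripping everything from the
first period onward):

```python
import os

# mysqlimport derives the target table from each data file's name:
# .../users.txt is loaded into table `users`. This tiny function
# mirrors that mapping purely for illustration.
def table_for(path: str) -> str:
    base = os.path.basename(path)   # e.g. "users.txt"
    return base.split(".", 1)[0]    # e.g. "users"

print(table_for("/var/tmp/somewhere/users.txt"))   # users
print(table_for("/var/tmp/somewhere/orders.txt"))  # orders
```

That mapping is why dumping with --tab and importing the whole
directory of .txt files in one mysqlimport call just works.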


On the issue of how much shared space, Heikki told me 200Mbytes would  
be far more than we would need if everything is  
innodb_file_per_table... but as my old table space was made with 2000M  
files I just kept ibdata01 and commented out the rest of the line. We  
certainly haven't had any issues with the 2Gbyte shared table space,  
and I would think 10G would be overkill (I think my 2G is overkill).


The only other area we discovered was an issue: if you are running a  
32 bit file system there is likely to be a problem on any table that  
needs more file space than the file system will give a single file.  
The solutions here are to use a 64 bit file system, which doesn't  
care so much, or to create a larger shared table space, turn off  
innodb_file_per_table, and alter the table to InnoDB (ALTER TABLE t  
ENGINE=InnoDB - even if it is already InnoDB, altering it like this  
will recreate it new). Turn on innodb_file_per_table again and that  
table will stay in the shared table space, while the rest will be in  
their own files. The main problem here is that once the file reached  
the OS limit InnoDB thought the table was full (which technically it  
was)... so InnoDB's autoextending files don't know how to launch a  
second file once the file system's upper limit has been reached.


Best Regards, Bruce

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Phone Number Storage

2005-07-25 Thread Bruce Dembecki


On Jul 25, 2005, at 1:23 PM, Sujay Koduri wrote:


I guess anywhere we have 3 levels of hierarchy for a phone number
(country code, area code and the actual number).
The advantage of separating them into different columns (either an
integer or a string) is that he can group different phone numbers
based on area code or country code.



The key issue is less one of separating them into columns and more  
one of global use... As Joerg was saying, many countries have area  
codes or even phone numbers that start with a zero - 0. If you store  
phone numbers as an integer the leading zero will be dropped, which  
means you are storing incomplete data.


In most applications it does also make sense to store the area code  
and country code as separate strings... but don't fall into the trap  
of thinking a zero at the front of a phone number, an area code, or  
even some country codes isn't important.
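A quick illustration of the leading-zero problem (plain Python; the
number itself is made up):

```python
# A made-up number with a leading zero, as many countries use.
raw = "020 7946 0018"
digits = raw.replace(" ", "")   # "02079460018"

# Stored as an integer, the leading zero is silently lost...
as_int = int(digits)
print(as_int)                   # 2079460018 - incomplete data

# ...but stored as a string the number round-trips intact.
assert digits.startswith("0")
assert str(as_int) != digits
```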


Best Regards, Bruce




Re: why so many table scans?

2005-07-25 Thread Bruce Dembecki


On Jul 25, 2005, at 3:47 PM, Chris Kantarjiev wrote:



link_id is indexed. There are about 8 million rows in the table,
and most of them have link_id = null right now. latitude and longitude
are not indexed - but my understanding is that mySQL will only
use one index, and link_id is the interesting one for us.

Are the latitude and longitude qualifiers the cause of the table  
scans?



Yes, they almost certainly are the cause of the problem. A query may  
only use one index, but the table can have several, and the MySQL  
optimizer will choose the most appropriate index for the query it is  
running. In a case such as this, where most entries have a null  
link_id, you are hurting from having no index covering the other  
columns. If this was all the table was going to do and all your  
queries were of this form, you could make an index on (link_id,  
latitude, longitude)... as long as almost everything is null it's the  
equivalent of using an index on (latitude, longitude)... but as things  
change (I assume you don't expect them to stay null) your one index  
will accommodate that...


However... when you say what's important to you is the link_id, I  
assume you mean that's what is important in the result... not what is  
important in the search itself (as it clearly isn't now if they are  
mostly null). The thing to remember is that while a query may use  
only one index, a MySQL table can have many (don't go nuts here), so  
by adding an index on (latitude, longitude) you are buying yourself a  
bunch of performance.


Beware of course... too many indexes, or overly complicated ones, can  
be a performance issue in their own right. The trick is to put in the  
right indexes for your data and for your queries; don't just add  
indexes for indexing's sake.
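As a toy sketch of why the extra index helps (plain Python standing
in for the idea of a B-tree lookup; this is not MySQL internals, and
the sample rows are invented):

```python
import bisect

# A miniature table: (latitude, longitude, link_id), link_id mostly null.
rows = [(37.77, -122.41, None), (40.71, -74.00, 7), (37.77, -122.41, 3)]

# A composite "index" on (latitude, longitude): sorted keys plus row
# positions, roughly what a B-tree index hands the optimizer.
index = sorted((lat, lon, i) for i, (lat, lon, _) in enumerate(rows))
keys = [(lat, lon) for lat, lon, _ in index]

def find(lat, lon):
    # Binary search on the sorted keys instead of a full table scan.
    pos = bisect.bisect_left(keys, (lat, lon))
    hits = []
    while pos < len(index) and keys[pos] == (lat, lon):
        hits.append(index[pos][2])
        pos += 1
    return hits

assert find(37.77, -122.41) == [0, 2]   # matching row positions, no scan
```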


Best Regards, Bruce





4.1.13 OS X MAJOR I/O Degredation

2005-07-22 Thread Bruce Dembecki
So it appears I am having an issue with 4.1.13 which I'm guessing is  
a bug... wanted some input before I file it...


Setting up a new machine to take over for an old one, so it's clean,  
Operating System and some empty disks... the server does nothing  
other than MySQL so there are no other processes running. It has  
16Gbytes of ram and the data disks are a 7 disk RAID5 array on a  
2GBit/Sec Fiber Channel connection.


If I create my data directories and copy the mysql database from  
another server with a simple copy (mysql is myisam so it's no issue)  
I am ready to launch mysqld... When mysqld launches it of course  
needs to create my InnoDB data files and log files before it comes up...


I first did this under 4.1.13 Community edition and was SHOCKED by  
the results... one 2Gbyte shared data file for InnoDB, and 2 250Mbyte  
log files... what felt like an hour later it finished... Tried 4.1.13  
Pro released today... same thing... Tried 4.1.12, better, still  
slower than I would expect, but better... let me quantify that a  
little. From the log files below you will see that the time to create  
the InnoDB files and get to the point of being ready to connect is:


MySQL 4.1.13 Pro:      54 minutes 51 seconds
MySQL 4.1.12 Standard:  4 minutes 16 seconds

While I didn't keep a 4.1.13 Standard log, it's pretty much the same  
as 4.1.13 Pro.


If I copy the 2Gbyte file once it is created, it can be duplicated in  
22 seconds... so disk performance on its own isn't the issue. Any  
disk I/O I try to do on the machine during the hour long lock out is  
also degraded... copying another file for example can take a very  
long time... 3 minutes for a 35Mbyte directory... copying the same  
file after MySQL has launched takes no time at all.


I duplicated this exact same thing with another server... times don't  
change much here, so it doesn't seem to be specific to this one machine.


This is all on Mac OS X 10.4.2

Best Regards, Bruce


[data-admin:/var/mysql] root# tail -f mysql.err
050722 13:51:08  mysqld started
InnoDB: The first specified data file /mysqldata/ibdata01 did not exist:
InnoDB: a new database to be created!
050722 13:51:09  InnoDB: Setting file /mysqldata/ibdata01 size to  
2000 MB

InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100  
1200 1300 1400 1500 1600 1700 1800 1900 2000
050722 14:34:45  InnoDB: Log file /mysqldata/ib_logfile0 did not  
exist: new to be created

InnoDB: Setting log file /mysqldata/ib_logfile0 size to 250 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200
050722 14:40:15  InnoDB: Log file /mysqldata2/ib_logfile1 did not  
exist: new to be created

InnoDB: Setting log file /mysqldata2/ib_logfile1 size to 250 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200
InnoDB: Doublewrite buffer not found: creating new
InnoDB: Doublewrite buffer created
InnoDB: Creating foreign key constraint system tables
InnoDB: Foreign key constraint system tables created
050722 14:45:59  InnoDB: Started; log sequence number 0 0
/usr/local/mysql/bin/mysqld: ready for connections.
Version: '4.1.13-pro-gpl-log'  socket: '/tmp/mysql.sock'  port: 3306   
MySQL Pro (GPL)



[data-admin:/var/mysql] root# tail -f mysql.err
050722 14:58:06  mysqld started
InnoDB: The first specified data file /mysqldata/ibdata01 did not exist:
InnoDB: a new database to be created!
050722 14:58:06  InnoDB: Setting file /mysqldata/ibdata01 size to  
2000 MB

InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200 300 400 500 600 700 800 900 1000 1100  
1200 1300 1400 1500 1600 1700 1800 1900 2000
050722 15:01:22  InnoDB: Log file /mysqldata/ib_logfile0 did not  
exist: new to be created

InnoDB: Setting log file /mysqldata/ib_logfile0 size to 250 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200
050722 15:01:47  InnoDB: Log file /mysqldata/ib_logfile1 did not  
exist: new to be created

InnoDB: Setting log file /mysqldata/ib_logfile1 size to 250 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200
InnoDB: Doublewrite buffer not found: creating new
InnoDB: Doublewrite buffer created
InnoDB: Creating foreign key constraint system tables
InnoDB: Foreign key constraint system tables created
050722 15:02:22  InnoDB: Started; log sequence number 0 0
/usr/local/mysql/bin/mysqld: ready for connections.
Version: '4.1.12-standard-log'  socket: '/tmp/mysql.sock'  port:  
3306  MySQL Community Edition - Standard (GPL)


After MySQL finishes creating the files and is running but sitting  
idle, duplicating the 2Gbyte file takes 22 seconds, so it is not a  
disk performance issue:


[data-admin:/var/mysql] root# time cp /mysqldata/ibdata01 /mysqldata/ 
ibdatacopy

0.016u 5.571s 0:22.67 24.6% 0+0k 16+24io 0pf+0w



Re: Backups on high-availability servers

2005-07-22 Thread Bruce Dembecki

I would really like to hear how some of you are handling backups on
high-availability servers.  The DBA in my company is skeptical about
switching from MSSQL Server to MySQL, this is one of his reasons
(backups).  If someone is making MySQL work in a high-availabity
environment, let's hear about it!

Thanks for any input,
-Ryan

So... where to begin. We're truly managing our databases for high  
availability... so much so that we actually have hardware load  
balancers between our database servers and our application servers...  
although that's a little extreme. Ignoring the load balancers here's  
what we do:


Production database pair replicates between each other, i.e. each  
server in a pair has the other one as its replication master. We  
deploy several pairs of database servers this way; each pair has its  
own data and the other pairs don't need to know about it.


This addresses several issues... firstly by running in pairs we can  
deploy our apps servers in pairs also (or fours or sixes etc...), so  
apps1 speaks to data1 and apps2 speaks to data2... if something  
happens to data1, it takes out apps1 (except in our case where the  
hardware load balancer reconnects apps1 to data2)... but whatever you  
are doing with your apps servers in a high availability situation  
should be able to detect apps1 is in trouble and fail it out of  
production anyway. Also because replication is so fast (not  
instantaneous though, but close) it means we have current data on two  
servers all the time... if something happened to a disk array or  
something, we'd still have the data.


From there we have what I call our mysql-admin server... this  
handles the non production tasks... let me explain. We are doing both  
High availability and high volume... this means we need the  
production servers to be fast. Because the volume is high that means  
things like reports can cause quite a drag on the database servers...  
however the reports are only seen by us and our clients, not by all  
the users. Running reports usually puts a dent in performance on the  
database side. For that reason we have a separate server that handles  
this.


So our mysql-admin server connects to each database pair and  
replicates the data from all of them... so we have several instances  
of MySQL running on mysql-admin. This machine also has a large  
storage array attached to it. It is here we do things like run  
reports and do backups... it can be done without affecting production  
servers at all. I have a series of scripts which run in Cron that  
stop the replication process, and simply copy the data files to the  
array. Replication is started again and the scripts go on to do  
things like compress the data files and so on...  It's very fast and  
totally painless on the production servers.


I realize that not everyone is in a position where they can throw  
hardware load balancers between their apps servers and their database  
servers, nor can they dedicate an extra server to handle the admin  
tasks for the databases. That said, we also rely exclusively on  
InnoDB... If your needs are truly high availability you probably  
will too. You don't need a separate server to handle backup with  
InnoDB and you don't need to take a live server out of the loop.  
InnoDB offers what they call InnoDB Hot Backup... which is a program  
you can buy that will handle the backup while the server is live and  
store all the changes made during the backup in its logs... it's very  
effective and works very well. We have used InnoDB Hot Backup and  
like it very much. The only reason we switched away from using it is  
because we could... we changed the way our admin server runs now and  
it is always deliberately not reading replication changes from the  
production servers for 1:50 of every 2:00 hours... so it's easy for  
us to just do a file system copy of the database files... it's a  
LITTLE easier for us to recover from that without using InnoDB Hot  
Backup... so we do it that way... but InnoDB Hot Backup works great.


Not using InnoDB, or can't use InnoDB? Then your choices are a  
little harder. I don't know about other solutions... but the  
production server/admin server setup works great regardless of what  
your database engine is... if you have a server you can do it on :-)


Best Regards, Bruce




Re: innodb_file_per_table table status

2005-07-22 Thread Bruce Dembecki

Hi,

I've just converted my databases so they are using per_table
tablespaces.
When I do a show table status in that database the Comment line
still shows the amount free, but I assume this doesn't mean anything
now?  It doesn't make sense?




As innodb_file_per_table makes individual files for each table, and  
each of these files is an autoextending file, the information here is  
somewhat limited in value. If there is space available it's usually  
because the table needed to be that large at some point, and since  
that point some of the data has been removed. If InnoDB needs more  
space than that it will just grow the table's file on disk a little  
at a time, as it needs it. So generally speaking you don't need to  
worry about this.


There is an exception to this however... InnoDB can not grow the  
table file larger than the operating system's largest file size  
limit. For example, if you can only have a 4Gbyte file on your  
operating system, InnoDB will start complaining that the table is  
full once your file reaches that size and the space inside the file  
has been used. If you do have such a table you will be in trouble  
unless you move it to the shared table space, which can exist across  
several files, or to MyISAM or some other table type which keeps its  
indexes and data in separate files.


Generally speaking though, for most people, the space available  
display doesn't mean much when using the auto extending files created  
when you switch to innodb_file_per_table.


Best Regards, Bruce




Re: migrate from 3.x to 4.1 character set problem

2005-07-18 Thread Bruce Dembecki

hi,
we've got an old mysql-3.23.58 and a new mysql-4.1.10a and we'd like to
migrate our data, but it doesn't seem to be so easy :-(
our old server has a latin2 database. after we dump it and try to import
into the new one we always get errors, or the special accented hungarian
characters come out wrong.
- what is the preferred (and working) way to migrate from the old to the
new?
- how can we define the new character sets?
we tried these variations (and manually created the database with default
character set latin2):
1. mysqldump --opt -p xxx > xxx.sql
mysql xxx < xxx.sql

2. mysqldump --opt --default-character-set=latin2 -p xxx > xxx.sql
mysql --default-character-set=latin2 xxx < xxx.sql

3. mysqldump --opt -p xxx > xxx.sql
iconv -f ISO_8859-2 -t UTF-8 -o xxx2.sql xxx.sql
mysql xxx < xxx2.sql

and many more combinations; we tried to read all the docs, but can't
find any solutions.
another question: what does the collation latin2_hungarian_ci contain?
how can i interpret that xml file? e.g. is a equal to á or not? is
there any way to find out how the built-in collations are defined? or
any description?

thank you for your help in advance.
yours.


Generally speaking you need to define the character set for each  
column or table in your 4.1 database, or set a default character set  
for the database or for the server, this is independent of the  
default character set used by the clients... Then you need your  
clients to connect to the database using the appropriate character  
set... while the examples above seem correct, there are some  
opportunities for errors to occur.


Firstly, export the data using the mysql tools provided with  
3.23.58... e.g. make sure you use the mysqldump that comes with the  
3.23.58 mysql binary - chances are that it will be mysqldump 3.23.58.  
I expect that version of mysqldump will not support the --default- 
character-set flag and should throw an error if you try to give it  
that flag... It's important that you export the 3.23.58 data the way  
it is, and let the 4.1 tools deal with putting it into the new format  
appropriately. Using mysqldump from 4.1 may not give you exactly the  
same results, so you should avoid that. Also, for what it is worth,  
you may want to try doing the dump slightly differently... we always  
use --tab=/var/tmp/database or some such thing, and that creates a  
series of files in the folder you specify: one .sql file for each  
table containing just the CREATE TABLE statement, and one .txt file  
for each table containing just that table's data in tab delimited  
format. It means your import process will be slightly different, but  
it's faster, and because we have done it regularly it's more likely  
to handle the data conversion.


Next when doing the import make sure you use mysql tools that match  
the database you are installing. Here you will need to specify the  
default character set for the clients, they will understand and use  
that when speaking to the database. Here is the process we use to do  
the export from 4.0 and import into 4.1, there should be no great  
difference in how 3.23.58 and 4.0 handle the character sets so the  
results should be much the same. We use UTF8, and our 4.0 databases  
had no special character settings, so it was stored in the database  
as latin1. On the original server using 4.0.n server and tools to  
match we run this:


mysqldump --tab=/var/tmp/database database

You should be able to do the same thing provided you use mysqldump  
3.23.58, again make no allowances for character set in the dump  
process, you just want the data dumped to disk the same way it is  
stored now.


Then we move the directory /var/tmp/database to /var/tmp on the new  
server with 4.1 running... note this has the 4.1.n server AND the  
4.1.n tools (such as mysql, mysqldump, mysqlimport and so on).


Finally we go ahead and import our data into the server using this  
sequence of commands (we use a shell script, so that's what you get  
here). Call the shell script by giving it the database name as a flag  
(eg ./import database) - watch for differences in line breaks caused  
by email clients here, there are three lines of commands after  
setting DB=$1.


#!/bin/sh
#
# LiveWorld's MySQL Import Script
# Use for converting 4.0 databases to 4.1 UTF8 databases
# Suitable for LiveWorld Servers only, use at your own risk
#

DB=$1

mysql -e "CREATE DATABASE $DB default character set utf8;"
cat /var/tmp/$DB/*sql | mysql --socket=/tmp/mysql.sock $DB
mysqlimport --default-character-set=utf8 $DB /var/tmp/$DB/*txt

Obviously you are going from latin2 to latin2 so it should be a  
little easier for you than it was for us... and you'll want to make  
some changes in the script compared to our utf8 stuff (of course you  
may want to just go with utf8 anyway, should handle most anything you  
want to throw at it that way, our databases run in 30 languages).


So be careful to match your tools with your server version and try to  

Re: Switching legacy tables from 4.0 to 4.1 -AND- from default to UTF-8 .... ADVICE ANYONE?

2005-07-17 Thread Bruce Dembecki

On 7/15/05, Bruce Dembecki [EMAIL PROTECTED] wrote:
 This process has worked for us taking our latin1 4.0 databases and
 turning them into utf8 4.1 databases. UTF8 data we had already  
put in

 our 4.0 database despite it's latin1 encoding was correctly exported
 out of 4.0 and correctly converted on it's way in to 4.1


Wow!  Really?  This part amazes me.

So the MySQL importing process seems to do the converting of the
special characters into the newly-defined encoding format?  (from
latin1 to utf-8 in my case)

See - we do webhosting for clients around the world, and right now our
default-encoded MySQL 4.0 databases have Swedish and Hebrew characters
in them.

I'm concerned that if I dumped them as latin1/default in 4.0, but then
imported as utf-8 in 4.1 that the non-ASCII characters would get
imported as the wrong encoding.

(Assuming, yes, that I would set our new 4.1 databases to do ALL utf-8
in the /etc/my.cnf and gladly take the small performance/size hit.)




Oh, and one more thing: the conversion worked for us on our Hong Kong  
boards, where they have a lot of Japanese speakers, the Chinese  
speakers of course, and a lot of English messages, and of course even  
all-Chinese messages with email addresses in regular text... so yes,  
not only did it work for us, it worked for us with a multitude of  
different character sets in the very same table (even in the same  
column). Gotta love utf8.


Best Regards, Bruce




Re: Separate disk for logs, DRBD ...

2005-07-17 Thread Bruce Dembecki

Dear All,

Am planning on making MySQL write its data files to disk1 and log  
files

to disk2.

My questions are :

1. I know I can put the connections, slow, query, and InnoDB logs on
disk2.

Is it also possible (and advisable) to put the binary logs with  
them

?

We log to the OS disk, and keep data on the data disks... binary logs  
go to the log disks. The exception here is our InnoDB logs: in the  
event of a crash InnoDB needs them to rebuild itself, so they go with  
the data disks... but that's just us.



2. If disk2 is slower than disk1 ( like when disk1 is 15k RPM while
disk2 is 10k RPM ), will it slow down any data-related operations ?

May be an issue for InnoDB logging, unless you move the InnoDB logs  
to the data disk like we do. Not an issue for binary logs etc; they  
are handled by their own threads.



3. I'm thinking of using DRBD to replicate changes on one MySQL Master
server to another box. Does anyone here have a similar setup ?

I plan on buying 2 identical servers with 3 disk each - 1 for the
OS, the other for Data, and the last one for Logs.

I don't know DRBD, so can't speak to that, but I can say what we  
did with three disks and why...


One disk for OS and logging... two disks mirrored for data - now the  
why. Firstly, we are old fashioned when it comes to IT type stuff;  
everything is built redundant... mirrored disks give us a level of  
protection for our data. Next is performance... mirrored disks in  
most operating systems (including ours) will read from both disks  
like a striped disk, so reads are pretty much twice as fast as a  
single disk. Clearly writes take normal amounts of time. So we get  
redundancy and double the read performance by using two disks  
mirrored for data... The OS isn't using much disk I/O, so having logs  
on a different disk than the OS seems like it's not buying you much  
in most cases.


(As a side note we also use a hardware RAID card to run the mirror,  
rather than the Operating System... that way there's no performance  
hit on the OS in writing to the mirror, but again that's just us).


Best Regards, Bruce




Re: Switching legacy tables from 4.0 to 4.1 -AND- from default to UTF-8 .... ADVICE ANYONE?

2005-07-16 Thread Bruce Dembecki

On 7/15/05, Bruce Dembecki [EMAIL PROTECTED] wrote:
 This process has worked for us taking our latin1 4.0 databases and
 turning them into utf8 4.1 databases. UTF8 data we had already  
put in

 our 4.0 database despite it's latin1 encoding was correctly exported
 out of 4.0 and correctly converted on it's way in to 4.1


Wow!  Really?  This part amazes me.

So the MySQL importing process seems to do the converting of the
special characters into the newly-defined encoding format?  (from
latin1 to utf-8 in my case)

See - we do webhosting for clients around the world, and right now our
default-encoded MySQL 4.0 databases have Swedish and Hebrew characters
in them.

I'm concerned that if I dumped them as latin1/default in 4.0, but then
imported as utf-8 in 4.1 that the non-ASCII characters would get
imported as the wrong encoding.

(Assuming, yes, that I would set our new 4.1 databases to do ALL utf-8
in the /etc/my.cnf and gladly take the small performance/size hit.)



OK, this is where we are at, and what it does for us... I can't speak  
to where you are at and what it will do for you, however, we have this:


4.0 data: everything is encoded by MySQL as latin1. However the web  
app and the JDBC driver both speak UTF8, so the data being given to  
the database is UTF8 data which the database then stores in its  
latin1 tables. When the JDBC driver extracts the data from 4.0 and  
gives it to the web app it displays as we would expect in the  
language it was entered... It was very cool and worked great for us.


Then we went to 4.1... just changed the binary, expecting things to  
work the way they did... wrong. Suddenly all our databases with  
strong non-latin1 character sets were in trouble. Our Chinese boards  
were a mess, as were our German boards... even our Australian board,  
which should have been fine, was a mess because one of the areas was  
titled "Australia Café!" and the word was in there a lot. It took  
about three months and many hours with MySQL folks to figure it out.  
The upshot however was that if you went about the conversion by doing  
a dump and an import, and made sure to tell the import to treat the  
specific columns as UTF8, everything arrived in fine shape. (There  
was one set of data loss because it turned out one of the Chinese  
boards was set to Big5 and the JDBC driver was set to UTF8 and the  
database latin1... It gets to a point where there's only so much  
encoding it can take.)


That said, the reality is that if your application is anything like  
most, the majority of the columns are for the application to work  
with; very few actually deal with text that needs to be encoded. The  
rest are time stamps, id#s, references to help you track the data...  
there's likely not much if any benefit to having them UTF8 encoded.  
For us, we have 80 databases with almost that many tables each, with  
many columns... figuring out one by one which way is up on everything  
represents a challenge. We're going to go back and do it now. But if  
I had one or two databases, I'd start by leaving the database the  
default (latin1), doing the database create and importing the .sql  
files (which creates the tables), then go through and run ALTER TABLE  
a few times to set the specific columns to UTF8... then do the import  
the same as in my script... plain latin1 text in the ASCII range  
encodes in UTF8 to the same bytes, so you shouldn't have any problem,  
but I'm guessing here of course. Dump your data to disk and bring it  
up on another server; try different things and find out what works  
best for you before you do the real import.
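That guess can be checked outside MySQL; a small Python check
(illustration only) shows the ASCII subset is byte-identical between
latin1 and utf8, while accented characters are not:

```python
# Plain ASCII text: latin1 and utf8 encode to the very same bytes,
# so those columns convert with no visible change.
plain = "plain words"
assert plain.encode("latin-1") == plain.encode("utf-8")

# An accented character is one byte in latin1 but two in utf8,
# which is exactly why the import must be told the right charset.
accented = "Café"
assert accented.encode("latin-1") == b"Caf\xe9"    # 4 bytes
assert accented.encode("utf-8") == b"Caf\xc3\xa9"  # 5 bytes
```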


It SHOULD be pretty painless if you manage the import and use the  
mysql tools with the appropriate flag set for character sets on the  
way in... the only thing that will be painful about the process is if  
you just upgrade the binaries and expect the thing to work the way it  
did before - then you'll have some pain :-)


Best Regards, Bruce



Re: Switching legacy tables from 4.0 to 4.1 -AND- from default to UTF-8 .... ADVICE ANYONE?

2005-07-15 Thread Bruce Dembecki

I've got some years-old MySQL databases mostly in 4.0, but one server
running 3.23 that are all using the default encoding.

I want to update all their data to 4.1 with UTF-8 encoding.

Anyone done this kind of dump-and-update?Any advice to share or
good URLs you've seen with others' advice about this?




Hi! We have been going through this process since January... the  
learning curve was steep and the resources hard to find. At the end  
of the day it's quite simple unless there are weird things already in  
your database... here's the process we use:


On the original server using 4.0.n server and tools to match we run  
this:


mysqldump --tab=/var/tmp/database database

Then we move the directory /var/tmp/database to /var/tmp on the new  
server with 4.1 running... note this has the 4.1.n server AND the  
4.1.n tools (such as mysql, mysqldump, mysqlimport and so on). Also  
note we have this line (amongst others) in our 4.1 my.cnf file:


[mysqld]
default-character-set=utf8

The upshot of this is that by default all new databases and tables  
will automatically create themselves with utf8 as the default  
character set unless told otherwise. For us this was important  
because we have MANY databases with MANY tables with MANY columns,  
and going through and setting the character set for each and every  
database/table/column was prohibitively expensive. HOWEVER... give  
some thought to this, the reality is for our setup there are maybe  
two or three columns in two or three tables in each database that  
really need to be UTF8 - most of the data doesn't need to be encoded  
this way, and there are some overheads to having everything encoded  
in utf8. But in the interests of time and quick conversion we did it  
this way; we are reassessing it and may change things before we  
convert the rest of the databases. If you do decide to do this, you  
may want to set up your mysql database using latin1 before setting the  
default for everything on the server to utf8... columns like username/ 
hostname/password are 16-character varchars, and when converted to  
utf8 they allow 16 bytes rather than 16 characters; since utf8 uses  
multibyte characters you may only get 5 characters into your usernames  
etc... so there are little gotchas to setting the default character  
set for the whole server to utf8, and if you do, configure the mysql  
database separately on its own.


OK, having set the default character set for everything on the server  
to utf8 we go ahead and import our data into the server using this  
sequence of commands (we use a shell script, so that's what you get  
here). Call the shell script by giving it the database name as a flag  
(eg ./import database) - watch for differences in line breaks caused  
by email clients here, there are three lines of commands after  
setting DB=$1.


#!/bin/sh
#
# LiveWorld's MySQL Import Script
# Use for converting 4.0 databases to 4.1 UTF8 databases
# Suitable for LiveWorld Servers only, use at your own risk
#

DB=$1

mysql --socket=/tmp/mysql.sock -e "CREATE DATABASE $DB;"
cat /var/tmp/$DB/*sql | mysql --socket=/tmp/mysql.sock $DB
mysqlimport --default-character-set=utf8 --socket=/tmp/mysql.sock $DB /var/tmp/$DB/*txt



If you choose (probably wisely) not to set the default character set  
for the server to utf8 you can achieve the same result by making the  
first execution line of the above script to look like this:


mysql --socket=/tmp/mysql.sock -e "CREATE DATABASE $DB DEFAULT CHARACTER SET utf8;"


which will still have the effect of making your newly imported  
database use utf8 everywhere.


This process has worked for us, taking our latin1 4.0 databases and  
turning them into utf8 4.1 databases. UTF8 data we had already put in  
our 4.0 database despite its latin1 encoding was correctly exported  
out of 4.0 and correctly converted on its way into 4.1; we don't  
lose anything along the way. Again though, I need to restate...  
things like:


username varchar(75) binary NOT NULL default ''

take on a new meaning under utf8: it's no longer 75 characters but  
75 bytes, and utf8-encoded data takes more bytes. There are also  
potential data issues where you expect something to be 8 characters  
but it's really 24 bytes, so a varchar(8) may break new data  
inserts. Disk space issues come out of this too, and of course if  
you triple your disk usage there may also be new performance issues.  
Our recommendation is to do the import as above (this way you are  
sure to get your utf8 data into the database the right way) and then  
go through (by script potentially) and convert the tables and  
columns that don't really need to be utf8 back to latin1... which is  
what we are looking at doing.
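As a sketch of that per-column cleanup (the database, table, and column names below are placeholders, not from our setup - review every statement before running it, since MODIFY ... CHARACTER SET converts the stored data as well as the column definition):

```
mysql --socket=/tmp/mysql.sock -e \
  "ALTER TABLE some_table MODIFY some_column VARCHAR(75) CHARACTER SET latin1;" some_db
```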


Hope this has been a little helpful :-)

Best Regards, Bruce

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Using START SLAVE [SQL_THREAD] UNTIL syntax

2005-07-12 Thread Bruce Dembecki
Good issue, I totally had the same concerns, so we built our own  
system. As a side note we run an admin server which we use to  
generate reports, run backups and so on - it takes load off the  
production servers, where speed is critical. Recovery from backup  
however is a whole other issue, and a corrupting error gets  
replicated very quickly - with the new system we use we have cut  
recovery from backup from 4 hours to 30 minutes, and the data is newer.


Our approach is a little different, but basically the same. In an  
ideal world I'd like to see MySQL add a feature to replication that  
allowed me to set a variable to control how quickly the SQL_THREAD in  
replication executes its queries... e.g. a variable such as  
replication-delay=3600 could tell MySQL's replication thread to hold  
off executing any command until the time is 3600 seconds beyond the  
timestamp in the binary log.



Some of my questions:
1) What are the benefits to using relay_log_file and relay_log_pos
instead of master_log_file and master_log_pos?  that the slave
binlogs would already exist locally?  Perhaps that's good or bad?
thoughts?


Relay logs are better to use for this for one major reason - Assuming  
one of the reasons this server exists is to provide backup to the  
primary then having the data copied from the master server to the  
slave server provides a copy of the logs where you need them in the  
event of a hardware failure on the master - or in other words, if you  
manipulate the slave's SQL_THREAD and keep the IO_THREAD running you  
are copying your data pretty close to instantly, so you have it  
somewhere else. Most people would put that in the good category.  
The solution I proposed above does this too, keeping the data copied  
off the master all the time. Our home built set of scripts doesn't do  
this, we manage the process through controlling the IO_THREAD, it's  
easy, and we have two primary servers running as a pair so we have a  
live backup on a separate system. We would however prefer to have  
the data in the relay log current all the time, and manipulate the  
SQL_THREAD.




2) Has anyone done something like this?

Yes... ours is a set of simple scripts to start and stop the  
IO_THREAD and SQL_THREAD at certain times, run by cron. Specifically  
the sequence (which repeats every two hours) goes like this:


4:00pm stop SQL_THREAD
4:01pm flush logs, start IO_THREAD
4:05pm stop IO_THREAD
4:10pm start SQL_THREAD

The net impact of this is that the data on the admin server is on the  
low side 5 minutes behind live, and on the high side a little over 2  
hours behind.
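The two-hour cycle above can be driven straight from crontab; a sketch (the four helper script names are hypothetical - each would issue the corresponding STOP/START SLAVE statement, plus the log flush where noted, against the admin server):

```
0  */2 * * *  /usr/local/bin/stop-sql-thread
1  */2 * * *  /usr/local/bin/flush-logs-and-start-io-thread
5  */2 * * *  /usr/local/bin/stop-io-thread
10 */2 * * *  /usr/local/bin/start-sql-thread
```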


We also set all the start scripts to first check for the existence  
of a file (/var/mysql/replicate) and if the file doesn't exist, don't  
start anything... and we have a script that stops all replication and  
nukes that file. It can be run on the machine and is actually tied  
to a tcp port, so all we have to do is hit a specific port with a  
telnet connection or a web browser and replication is immediately  
stopped and won't be started again by cron until the file gets put  
back. So we have managed the emergency stop issue (thus we felt  
comfortable with a 5 minute low on this process).
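The guard-file check in those start scripts amounts to something like this sketch (the path matches the one above; the actual mysql command is commented out here since it needs a running server):

```shell
GUARD=/var/mysql/replicate
if [ -f "$GUARD" ]; then
    echo "guard present - starting replication thread"
    # /usr/local/mysql/bin/mysql -e "START SLAVE IO_THREAD;"
else
    echo "guard absent - leaving replication stopped"
fi
```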


By having the logs flush on the slave and the master every 2 hours as  
part of this process we have small chunks of binary log we can apply  
to an overnight backup if we miss the replication stop before the  
disaster hits this server... making recovery to a fairly recent point  
in time simple - we could then spend some time munging through the  
relevant binary log to eliminate the corrupting event before applying  
the rest of them, while our services is back online.



3) If I made it robust and flexible would people be interested in it?



Because our production servers are a MASTER-MASTER pair we are kind  
of OK with the method we use in controlling the IO_THREAD in this  
way, but I do acknowledge there is an attraction to having the relay  
log (flushed regularly) have the latest data thus ensuring the admin  
server has all the data in one form or another, even if it's not  
actually executed on the database. So MAYBE we would be interested.



4) Is there a better way?

Yes - I still firmly believe the BEST solution here should come from  
MySQL... give us a replication-delay type variable that allows us to  
set an implementation delay on replication: the relay logs are up to  
date and the queries are executed by the SQL_THREAD only after the  
configured number of seconds from the timestamp in the original  
binary log - no need to change the log formats, a fairly simple piece  
of code to add at MySQL's end... clearly the default value would be 0  
so it only comes into play when someone wants it... I'd much rather  
set the value to 30 minutes or an hour or something and know exactly  
how far behind my backup server is, instead of the sliding 2 hour  
range I have now...


Outside of a MySQL based solution, our system works great, it's 

Re: parsing show commands (for monitoring/logging)

2005-07-12 Thread Bruce Dembecki


does MySQL have a pretty way to persist snapshots of various show  
commands?


for example in Oracle you could just do:

insert into sysstat_log select sysdate, * from v$sysstat;
--(an ugly, overly simple example)

can the show commands be called/parsed with DBD/DBI?

so far the only way I've come up with to do this is by hacking up
tee(s) with cut, sed, etc. in bash which works fine but is rather
repulsive to look at.

in particular I'd like to build a cgi that generates a trended report
of show master status and show slave status to quantify the propagation
delay (or lack thereof) to mgmt.

any light shed would be greatly appreciated!



We do a status check using a script which runs this line, you could  
modify it to suit and direct the output where you want it of course:


/usr/local/mysql/bin/mysql -e "SHOW SLAVE STATUS\G" | grep Slave_SQL_Running


This of course returns:

Slave_SQL_Running: Yes

In mysql 4.1 there is a better variable you could grep your show  
slave status for:


/usr/local/mysql/bin/mysql -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master


which hopefully returns:

  Seconds_Behind_Master: 0

You can build a script that uses this type of command and appends the  
output to some log file.. then you can generate your trend file  
against that using whatever your preferred method of analysis happens  
to be... You can combine data from SHOW SLAVE STATUS\G,  
SHOW MASTER STATUS\G and so on in your script... and then manipulate  
the output to your liking:


This script would execute the command and output a single line with a  
timestamp and the line of the SHOW statement output you specify in  
the grep - you can execute the script and pipe the output  
onto the end of some sort of log file, and away you go... you can  
write two or three of these scripts to collect all the lines you want  
and put them in different log files, or put three lines in this  
script and put them all in one log file... Cron the scripts to run  
every nn minutes and you're set. Munging the data then is the easy  
part (and not my dept):


#!/bin/sh
#

echo `date` `/usr/local/mysql/bin/mysql -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master`
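A crontab entry to collect a sample every five minutes might look like this (the script path and log location are placeholders):

```
*/5 * * * * /usr/local/bin/slave-status.sh >> /var/log/mysql-slave-status.log
```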


Best Regards, Bruce




Re: mysql user name length

2005-07-11 Thread Bruce Dembecki
Just to make things REALLY messy... try setting the default character  
set of a 4.1 server to utf8, and then importing your data from 4.0...  
your mysql usernames are in real trouble now, because utf8 considers  
itself to be multibyte and takes more space, cutting down on the 16  
characters by... well... let's just say it's painful. It took me some  
time to figure out why it wasn't accepting my new usernames - very  
unpleasant.


Best Regards, Bruce

On Wed, Jul 06, 2005 at 03:46:02PM -0700, Tim Traver wrote:
 Is there any reason why I shouldn't increase the size of the  
allowable

 user names in mysql to var(32) instead of the default var(16) ???

 Couldn't really find much on it, but wanted to ask if anyone knows of
 any troubles this may cause...

Yes, there are a number of places within the server that only expect the
username to be 16 characters, and will almost certainly break in the
face of longer usernames.

Jim Winstead
MySQL Inc.





5.0.7 Upgrade (from 4.1) on OS X doesn't recognize datadir

2005-06-17 Thread Bruce Dembecki

So I am attempting a 5.0 upgrade from 4.1 on one of our OS X servers...

When attempting to launch mysqld it quits, with this error (showing  
two from the log files, happens with our build or the MySQL binary):



050617 14:03:46  mysqld started
/usr/local/mysql-standard-5.0.7-beta-osx10.3-powerpc/bin/mysqld_safe:  
line 2: --datadir=/mysqldata2: No such file or directory

050617 14:03:46  mysqld ended

050617 14:07:58  mysqld started
/usr/local/mysql-lw64bit-5.0.7-apple-darwin8.1-powerpc/bin/ 
mysqld_safe64: line 2: --datadir=/mysqldata2: No such file or directory

050617 14:07:58  mysqld ended

Needless to say /mysqldata2 is present and accounted for, and has the  
correct permissions for mysql to be able to read/write data... It is  
a symlink to another volume, but if I substitute the true path to the  
volume, I get the same error...


I had problems with one version of 4.1 figuring out where to write  
the log files, and it turned out to be an absolute file name issue...  
e.g. it treated /logs/binlogs as being relative to the data directory,  
and not as an absolute path... I resolved that by changing the log  
file setting to read ../logs/binlogs and it worked fine. Assuming  
there was a similar problem here I have tried various levels of  
../../mysqldata2 to make sure I am escaping from whatever directory  
it is starting me in, and have failed to get there, even after going  
up 7 levels, far more than would be needed to get back to root from  
anywhere in /usr/local/mysql-any-version.


Any idea how I can get MySQL 5.0 to launch here would be greatly  
appreciated :-)


Best Regards, Bruce




Compiling for 64 Bit on OS X 10.4

2005-05-03 Thread Bruce Dembecki
Hi! Can anyone help me with a 64 Bit OS X 10.4 binary? I've tried to compile
it myself but get errors in the make process that I have no idea what to
do with.

Apple ships a MySQL 4.1.10a binary with Tiger, but it's not 64 bit. MySQL
doesn't have a 64 Bit OS X 10.4 binary yet.

Maybe my question should be... When can we have a MySQL binary compiled for
OS X 10.4 with 64 bit?

Anyone with ideas on how I can compile one myself, would greatly appreciate
it... Right now I run this (which will surely be munged by email clients
adding line breaks - assume it's all one line):

CC=gcc CFLAGS="-O3 -fno-omit-frame-pointer" CXX=gcc CXXFLAGS="-O3 -fast -fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --disable-shared --without-isam --without-docs --without-debug --with-raid --without-bench --with-mysql=/usr/local/mysql

Best Regards, Bruce





Re: OS X and MySQL table corruption...

2005-04-14 Thread Bruce Dembecki
We have a large OS X MySQL deployment on multiple servers and we have
experienced a range of weirdness with table corruption that I was never able
to fully determine the cause for.

Moving to G5 Xserves (from G5 Towers and G4 Xserves) has seen all the
problems go away as if a switch were thrown. I don't have an explanation or
even a root cause, but I also don't have a problem any more.

The thing I see in this thread that several people are talking about is the
differences and conflicts caused with OS X Server's MySQL install and the
MySQL AB install...

There were some issues starting with 4.0.17 that weren't fixed until 4.0.19
that could affect Mac users under certain circumstances. Apple was last I
looked still deploying 4.0.18, I don't know what's current with 10.3.8, and
I expect a significant version change from Apple when Tiger comes out at the
end of the month.

Our way of dealing with this is firstly to use MySQL AB binaries. Secondly
we make sure the path includes /usr/local/mysql/bin/. Next we edit the
Startup script to make sure it is launching mysqld_safe from
/usr/local/mysql/bin and not from /usr/bin and finally, and this is the most
important one... As root you:

cd /usr/bin/
rm my*
ln -s /usr/local/mysql/bin/* .

This is not only replacing the Apple Binaries with the MySQL binaries, it is
protecting you as you upgrade MySQL versions... Each time you upgrade MySQL
versions the symlink to /usr/local/mysql/bin/ will always point to the
current version of MySQL you are using.

The only thing you need to watch is that from time to time Apple will
update their MySQL install (snuck into a System Update), which will
overwrite your symlinks with new Apple binaries (which is why the real
fix is to make sure you have the right path settings and replace the
startup scripts - even if Apple overwrites your symlinks, you'll still
be calling your binaries). Still, you should get in the habit of
checking the files in /usr/bin/my* each time you install a System
Update in case Apple has updated versions.

Best Regards, Bruce





Re: mysqld_multi at startup on Mac OS X not working

2005-04-14 Thread Bruce Dembecki
 I have successfully configured mysqld_multi to have mysql 4.1.11 and
 5.0.3 beta running on the same machine:
 
I would like to see how you configured mysqld_multi to do that, if you could
send me the information off list I'd appreciate it.

 # mysqld_multi start 
 # exit
 % mysqld_multi report
 Reporting MySQL servers
 MySQL server from group: mysqld4 is running
 MySQL server from group: mysqld5 is running
 %
 
 However, I can't get this to work at system startup time. Starting up
 a single mysql server works fine, with the following
 /Library/StartupItems/MySQL/MySQL script:
 
 #!/bin/sh
 
 . /etc/rc.common
 
 if [ "${MYSQL:=-YES-}" = "-YES-" ]; then
 
   ConsoleMessage "Starting MySQL database server"
   /usr/local/mysql/bin/mysqld_safe &
 fi
 
 But if I change "/usr/local/mysql/bin/mysqld_safe" to 
 "/usr/local/mysql/bin/mysqld_multi start", no servers start up. There
 are also no error messages in the .err logs: the last item there is
 the previous 'normal shutdown'.
 
 Any ideas? I would think that there should be no difference between
 executing mysqld_multi from a root shell and executing it at startup
 time, but apparently it's not the same.
 
You need to be careful... There isn't a difference between running
mysqld_multi at the command line and running it inside a script - remember
what you are running at startup isn't mysqld_multi but rather this command:

/System/Library/StartupItems/MySQL/MySQL start

That script then calls mysqld_multi, or not, depending on some variables in
the script...

What happens when you run

/System/Library/StartupItems/MySQL/MySQL start

At the command prompt... I venture a guess that the results are still no
mysqls start.

Let me share my startup script with you...

#!/bin/sh

. /etc/rc.common

StartService ()
{
if [ "${MYSQL=-NO-}" = "-YES-" ]; then
ConsoleMessage "Starting MySQL"
/usr/local/mysql/bin/mysqld_multi start
fi
}

StopService ()
{
/usr/bin/mysqladmin ping >/dev/null 2>&1
if [ $? -eq 0 ]; then
ConsoleMessage "Stopping MySQL"
/usr/local/mysql/bin/mysqladmin --socket=/tmp/mysql.sock shutdown
/usr/local/mysql/bin/mysqladmin --socket=/tmp/mysql2.sock shutdown
else
ConsoleMessage "MySQL is not running"
fi
}

RestartService ()
{
StopService
StartService
}

RunService $1

There are some minor differences in how mine (which is working) and yours
seem to be configured... Let's look at those... What version of OS X are you
working on? Mine is running on 10.3.8, has been running on the previous
versions of 10.3 also. The major difference I see is the test on
whether to start or not... This will be important. You have:

 if [ "${MYSQL:=-YES-}" = "-YES-" ]; then

While I have:

 if [ "${MYSQL=-NO-}" = "-YES-" ]; then

I don't know why yours is different, I know that mine works, it is Apple's
script and test, I just changed the binary it executes.

The other factor here is /etc/hostconfig - it must have a line that looks
like this:

MYSQL=-YES-

If YES is actually NO, or if the line is not present at all, the
startup script will not start MySQL.

Actually /etc/hostconfig is what the startup scripts use to decide what
to start. If you want to bounce your server and not have mysql start
when it reboots, you can edit /etc/hostconfig and set the YES to NO in
the MYSQL=-YES- line - just be sure to change it back when you are done.

When all is said and done you don't need to restart the whole machine to see
if your script is working.. You can simply run:

/System/Library/StartupItems/MySQL/MySQL start

And you will find out if you are working.

I also have some changes in the shutdown part of the script, because I use
mysqld_multi to start it, the original use of mysqladmin to shutdown the
single instance isn't going to shutdown both instances... So I add a line
for each instance to call mysqladmin shutdown and point it to each socket
file that is configured in the my.cnf file for each instance.
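For reference, a minimal my.cnf sketch for driving two instances with mysqld_multi - the ports and data directories here are illustrative, though the socket names match the shutdown lines above:

```
[mysqld_multi]
mysqld     = /usr/local/mysql/bin/mysqld_safe
mysqladmin = /usr/local/mysql/bin/mysqladmin

[mysqld1]
socket  = /tmp/mysql.sock
port    = 3306
datadir = /usr/local/mysql/data

[mysqld2]
socket  = /tmp/mysql2.sock
port    = 3307
datadir = /usr/local/mysql/data2
```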

Hope that helps.

Best Regards, Bruce





Re: Replication issue: I/O thread dies immediately after START SLAVE with no logged errors

2005-02-11 Thread Bruce Dembecki
Looking at your my.cnf files I don't see where you've told the slave what
server to connect to. The slave needs to know what server is the master.
This is usually accomplished by including a couple of lines in my.cnf.

If the file master.info is in the data directory it will override the my.cnf
settings because it contains more information.

So you either need to add lines like:

master-host = hostname
master-user = username
master-password = password

to my.cnf on the slave or add an appropriately formatted and constructed
master.info file to the data directory. If the master.info file exists and
is blank or doesn't include enough information, delete it. If it's there and
looks right, include its contents in your next mail here (you can blank out
the username/password info).
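As an alternative to hand-building master.info, you can have the server write it for you by issuing CHANGE MASTER TO on the slave (host, username, and password here are placeholders):

```
mysql -e "CHANGE MASTER TO MASTER_HOST='hostname', MASTER_USER='username',
          MASTER_PASSWORD='password'; START SLAVE;"
```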

Best Regards, Bruce

Tierney Thurban [EMAIL PROTECTED] wrote:
 Hi all.  Sorry if you get this twice -- it was posted to
 mysql-replication earlier, but it doesn't look like that list is
 really used.
 
 I'm having a problem with my replication setup.  This is my first time
 setting up replication, so this may be a simple problem.  I'm using
 one master and one slave, both running debian-testing, and they both
 have brand new 4.1.9 mysql installs (via apt-get).
 
 The problem is that each time I do a START SLAVE, the I/O thread dies
 almost immediately.  I can see it running only if I do START SLAVE;
 SHOW SLAVE STATUS\G on a single line.





Re: Slow Replication

2005-02-11 Thread Bruce Dembecki
Hannes Rohde [EMAIL PROTECTED] wrote:
 
 innodb_data_file_path=ibdata1:2G:autoextend
 
 innodb_buffer_pool_size=1200M
 innodb_additional_mem_pool_size=20M
 
 
May not solve the replication issue, but if this is a 4GByte server that is
dedicated to MySQL (ie you aren't using memory for anything else, like..
say... a web server or something) and the MySQL server is dedicated to
InnoDB which the other memory settings seem to support - then you need to
revise this.

If the system is running a 64 bit OS you should be running a 64 bit binary
and you should set the InnoDB Buffer pool to closer to 3200M... Additional
memory of something like 256M or maybe even 512M would work well...

If the system is running a 32 bit operating system (with a PIV it probably
is 32 bit) I've found that the best mix for us has been 1850M/256M - that
won't break the memory limits of a 32 bit OS... You may need to adjust
slightly depending on your OS. If you're running a 32 bit OS, having gobs
and gobs of RAM isn't going to help a lot because you can't give InnoDB
more than 2GBytes.

Of course if it is used for something other than MySQL then you clearly need
to keep some memory available for that too.

On our setup we don't use autoextend for the InnoDB data files, we make a
whole lot of 2000M data files (like 40 of them) - some Operating systems
don't deal well with large files - if your single InnoDB data file is a
little on the large side, then maybe (small chance) the issue is there...
Again that would probably affect more than just replication so probably
isn't the cause.

One of the things that affects replication is the network link between the
two servers, are they both connected at high speed with similar duplex
settings... Shouldn't be an issue as Replication isn't that hard on the
network resources, but if you were running one server at 100Mbit/Full Duplex
and the switch was running at a 100/half or something, weird things could be
happening. We've even seen setups where one side was set to Autonegotiate
and other side was set to 100/Full causing problems, because they end up at
10/Half on one side and 100/Full on the other, which gives pretty scary
network performance.








Converting Text columns from mysql 4.0 to 4.1

2005-02-08 Thread Bruce Dembecki
Hi! We have a problem converting our 4.0 text columns from a Hong Kong
database to 4.1. In order to get the conversions to work generally speaking
we build our databases with default character set utf8 - it means the German
products still work, and the English ones, and the Chinese ones, and the

Anyway, we ran into a problem on the Hong Kong platform where the text
column imports into 4.1 as a single space... If I look at the data in 4.0 I
see actual text (I suppose; it's mostly gibberish on my screen), while in
4.1 all I have after the import is a single space character.

If I change the column type to blob (from text) I can get the data imported
without problem, except that the data is now in a blob column. If I try to
alter the table to a text column, I am left with the single spaces again.

Looking at the data that does get affected (not all records suffer this
fate, just some) it appears that they have multiple languages, for example
Chinese or more often Japanese, together with something like an email
address which is written in latin type characters. I can post a new entry
through the webapp with mixed languages, it's just the export/import that
seems to be letting us down - or converting the blob to a text in 4.1
after the fact.

I even tried building a duplicate table format and doing an INSERT SELECT
where the source is a blob and the target is a text, and that also fails.

Clearly I can't convert the rest of my databases if there is a chance that
our message bodies will be munged. With about 100 databases, each with 60
tables, it's not even going to be easy to script a dump and an import with
something changing the column type in the .sql file from text to blob, let
alone the time it will take us to first test the application and web
servers to see if making the change to a blob column will affect us in
any way.

Do I need to be doing all this work... Is there something I have done
incorrectly? Is this a bug that someone is fixing and will go away next
version?

I can provide the dump files if someone wants to test... Let me know.

Best Regards, Bruce





Character Sets, 4.0 and 4.1

2005-02-04 Thread Bruce Dembecki
So today for the second time in six weeks we are faced with rolling back to
mysql 4.0 because of dramas with character sets. I don't know about anyone
else but this supposedly wonderful feature has been nothing but a nightmare
for us.

So our Application servers use Unicode for our non US English products, and
they talk to MySQL through Connector J with a flag set to use Unicode in the
JDBC config.

First time around we just dumped the data and then imported it into the 4.1
instance. Everything looked good, but it wasn't. The German folks were
complaining their various umlauts and so on were missing, and there was
more. Of course we're told to just bring the data over to mysql 4.1 and
we'll have no problems, so we do that, and because we didn't specify a
character set for the import, we got latin1, and our German and Chinese
and... All broke.

So six weeks of trial and experimentation later and we try for another
update. This time in our create database statement when we begin to import
the database, we set the default character set to utf8 for everything. Now
after the import our Germans and Chinese folks still get the results they
expect.

A day later and we are getting complaints from Hong Kong that there are a
whole bunch of messages appearing on their discussions with no message body.
We look at the backend and right there in the database the messages are
sitting and the body consists of exactly one space. Whatever content was
sent to us, was turned into one space. We look at it and we see that there a
more than a few messages that got migrated from 4.0 to 4.1 and their message
bodies are also one space. Not all messages, just some. Not all messages
from any individual user, just some... The 4.0 version of the data has
content that consists of more than a single space... Can't quite tell what
it is, but there's content there in 4.0 that disappears in 4.1.

So I understand that having multiple character sets is a good thing, but to
be honest, I pretty much thought we had it in 4.0. We told the JDBC to use
Unicode and away we went... Clearly someone was using something that wasn't
Unicode (some of the comments suggest that there is some Japanese in the
missing messages, but I can't tell), and for whatever reason mysql 4.1
decided it should be replaced with a space character.

I'm probably missing the point of the character set support along the way
somewhere... But I need to know how to fix this (I understand that's
difficult when all I have left is one blank space and don't know how to
reproduce the problematic data). What did I miss in the simple "open your
data files with 4.1 and it's good to go" instructions? Which character set
behaves the same as MySQL 4.0, which didn't care what character set you
gave it and accepted it anyway? Can we have a character set that gives us
that behavior?

And why are we taking input data on an import (and, by the looks of it, on
an insert) and turning it into a single space? Can't we do something better
with the data?

4.0 worked for us with products in 20+ languages. It worked with no great
effort and no problems... Now we have the new enhanced version which
provides better support for international character sets, and we find
ourselves with lost data from the moment we import, and user posts
disappearing as they come in. What do we do to not have this problem?

Best Regards, Bruce


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]



Re: Fixing the worst InnoDB corruption bug in 3 years - when

2005-01-06 Thread Bruce Dembecki
At the 2004 Users Conference in Orlando in April there were two sessions on
optimizing MySQL hosted by a MySQL staffer whose name eludes me for the
moment. He told the assembled masses that in benchmarks he ran,
innodb_file_per_table was somewhat faster than using the large InnoDB table
space. I didn't get the impression it was 50% faster or anything, but once
you've finished optimizing indexes and so on, any remaining gains come in
small pieces, and they all add up.

I can see the logic of it of course... Most of our servers are running
40Gbyte InnoDB table spaces, and two are running 100G spaces. Some of our
tables are small; some have only 7 rows of 2 columns... It must be easier
for InnoDB to find 100 bytes of data in its own file than in 100GBytes of
shared table space.
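
For reference, the change under discussion is a single my.cnf line; the data
directory path below is only an example:

```shell
# In the [mysqld] section of my.cnf:
#
#   innodb_file_per_table
#
# After a restart, newly created InnoDB tables get their own .ibd file
# next to the table's .frm; tables created earlier stay in the shared
# tablespace until they are rebuilt. A quick sanity check (path is an
# example, not ours):
ls /usr/local/mysql/data/mydb/*.ibd
```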

So I don't have anything quantitative, just hearsay from the folks at MySQL
who are the performance and fine tuning experts.

As to its being new... It's different. It's as new as MySQL 4.1 - if you
are using 4.1 then it's no newer than anything else there. MySQL staffers
were giving us benchmarks with it back in April at the Users Conference, and
I had already figured out I wanted to do it last January; I've just been
waiting for a) the production version, and b) an opportunity to take down my
services.

MySQL historically has released very stable products by the time they get to
Beta. We always wait till it goes Production, because we couldn't explain
to a client why a problem occurred on beta software, but it's only labeling.
A MySQL beta is typically more stable than most folks' release.1 or
release.2 versions. This is because of the very large base of people around
the globe using and testing MySQL and contributing to its development.

Best Regards, Bruce

On 1/6/05 7:38 AM, Ken Menzel [EMAIL PROTECTED] wrote:

 Hi Bruce
 
 SNIP
 - Original Message -
 From: Bruce Dembecki [EMAIL PROTECTED]
 To: mysql@lists.mysql.com
 Sent: Thursday, December 30, 2004 2:51 AM
 Subject: Re: Fixing the worst InnoDB corruption bug in 3 years -
 when
 
 As a side note with demonstrated performance increases when using
 innodb_file_per_table why aren't more people using it?
 
 Best Regards, Bruce
 /SNIP
 
 What demonstrated performance increases are you referring to?  I would
 love to use file_per_table, but as it is new we are very conservative
 with our production DB's and this feature is too new.  But if there
 are demonstrated performance increases we would love to start using
 it!
 
 Thanks,
 Ken
 





Re: mysql on OS X

2005-01-06 Thread Bruce Dembecki
Hi Scott! We use MySQL on 9 Mac OS X machines. While we are looking at
moving some of that back to big Sun boxes, that's a memory access/64 bit
issue, not (directly) a performance issue.

Looking at the live stats, one of the machines has an uptime of 55 days and
has averaged 405.78 queries per second across the whole time (that's about
35M queries per day). We run 8 in production, plus one admin server which
replicates from everyone, does our backups, and feeds the reports servers
and so on. Not all 8 production machines are running this busy, but it gives
you an idea of our traffic and throughput.

In our experience the key here is configuration. Overall for our use
(discussion boards) we find InnoDB tables are dramatically faster for us
than MyISAM. The key thing will be setting your memory settings in my.cnf to
be as generous as possible. Query Cache is great for us; typically we see
about 30% of our SELECT queries answered from the query cache, so that's
definitely worth turning on.

Does the machine do anything else or is it just serving MySQL? How much
memory does it have?

my-huge.cnf in actual fact isn't that generous. Typically my-huge would take
about 500M of RAM, maybe a gig if you have LOTS of connections set. It works
OK if you are running a machine that needs power for other things, such as
running Apache and PHP and MySQL all in one box... But if your MySQL box is
just serving MySQL and nothing else, you need to tune the machine as much as
possible for MySQL. Turn off system processes and options not really needed
for running a database. And tune up that memory.

Due to memory limits in Mac OS X's quasi 64 bit emulation, it is possible
for the Operating System to access more than 2Gbytes of RAM, but not
possible for any single process to do so, including (sort of) mysqld. The
trick here is that InnoDB grabs its own chunk of memory, so in actual fact
we can get nearly 4Gbytes of memory allocated to MySQL on OS X... Settings
of note here (if you are into InnoDB) are:

key_buffer_size=1024M
sort_buffer_size=4M
read_buffer_size=4M
read_rnd_buffer_size=8M
myisam_sort_buffer_size=64M
query_cache_size=128M
innodb_buffer_pool_size=1850M
innodb_additional_mem_pool_size=256M
innodb_flush_log_at_trx_commit=0

This last one improved performance for us under InnoDB dramatically.
We have a lot of connections, so the read buffers and so on are multiplied
by the number of connections... If you are only using MyISAM and don't have
so many connections in max_connections you can probably increase
key_buffer_size to closer to 1500 or so...

The query cache you should increase and monitor (using SHOW STATUS LIKE
'Qcache_%';). When your server has been running more than 48 hours and
Qcache_free_memory is still giving you a comfortable overhead, you can be
happy with it... If there's not much left in Qcache_free_memory, increase it
and try again. Total memory available under OS X with no InnoDB is
2Gbytes... So add up your key_buffer, the myisam_sort_buffer, the
query_cache, and the sort/read buffers multiplied by the number of
connections, and you get to where you can get to. Of course if your server
has 2Gbytes or less you need to reduce this somewhat to leave room for the
OS to run and stay within the available memory of the machine.
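
To make that arithmetic concrete, here's a quick sketch. The buffer sizes
mirror the settings above, but max_connections is an assumed value for
illustration, not ours:

```shell
# Back-of-the-envelope MyISAM-side memory budget (all values in bytes).
key_buffer=$((1024 * 1024 * 1024))      # key_buffer_size = 1024M
myisam_sort=$((64 * 1024 * 1024))       # myisam_sort_buffer_size = 64M
query_cache=$((128 * 1024 * 1024))      # query_cache_size = 128M
per_conn=$(((4 + 4 + 8) * 1024 * 1024)) # sort + read + read_rnd, per connection
max_connections=40                      # assumed -- check your own my.cnf

# Global buffers plus the per-connection buffers times the connection cap.
total=$((key_buffer + myisam_sort + query_cache + max_connections * per_conn))
echo "approximate budget: $((total / 1024 / 1024))M"
```

With these numbers the total comes out under the 2Gbyte per-process ceiling
described above; bump max_connections and you can watch it blow past that
limit.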

We are trying now to determine whether we can wait with OS X for their true
64 bit operating system, due sometime in the first 6 months of 2005 (which
we have to assume means June), or whether our new database server budget
should go to Sun boxes, which give us real 64 bit now and thus let us throw
a whole lot of memory at InnoDB, or whether we stay with our Apple strategy
and expand when we get the new OS. The problem here being that Apple has a
great storage situation, both local to the Xserve and using the Xserve
RAID... Sun has a problem in storage land right now.

It's not clear to us which way we should go. From a cost perspective, once
you load up a machine with 8Gbytes of RAM and multiple processors and lots
of disk space and multi year onsite support contracts, it doesn't make a lot
of difference whether we go Sun, Apple, Dell, HP, etc... They all come out
within a few $$ of each other. So for us it's 64 bit which is important; Sun
leads the way here, but Apple should get there very soon.

Anyway, it most certainly is possible to run mysqld under high load on OS X;
we do it all day, every day. The servers have 8Gbytes of RAM but really
aren't using much more than 2 yet... (We have some memory settings for
MyISAM caches, but our MyISAM tables are really only the archived data, so
we don't get much benefit from that.) Once OS X 10.4 comes out and we can go
64 bit, I expect we'll be a lot happier with our OS X G5s and can put enough
through them that we'll start to see CPU use become significant... It'll be
like getting two more servers for every server we already have.

Best Regards, Bruce


On 1/6/05 7:58 PM, Scott Wilson wrote:

 Hello,
 
 I'm interested to hear peoples' experiences running mysql on OS X.
 

Disk Block size

2005-01-04 Thread Bruce Dembecki
Hi! We're setting up a Solaris system which is behaving poorly, with very
slow disk performance. The nice folks at Sun have suggested a mismatch
between the block size MySQL is writing to disk and the block size the
Operating System is writing to disk.

While we can see the logic in the argument, over the years I have not been
able to find anywhere that this information was available or controllable,
so my assumption has been that MySQL hands the data to the operating system
to write to disk and doesn't deal with block sizes.

So here's the questions...

In mysql 4.0.23 or 4.1.8 for Solaris Sparc 64 bit what is the block size
mysql uses for writes to disk, and is there a way to control that?

Best Regards, Bruce





Re: Fixing the worst InnoDB corruption bug in 3 years - when

2004-12-29 Thread Bruce Dembecki
Thanks Heikki, I understand the bug, and I know you fixed it, for which I am
very pleased, as always problems when identified are fixed quickly :-)

I guess the question I was trying to ask of MySQL is when will 4.1.9 be
built... It's very frustrating knowing that a problem has been fixed
(particularly a serious problem such as this) but having to sit on our hands
and wait perhaps two months for a MySQL Official Binary to be built.

Heikki you have definitely done your part in finding and fixing this thing
rapidly. I understand MySQL doesn't want to make new releases every week,
they need to set some guidance for their users that their releases will not
be made too frequently. On the other hand this isn't something that is a bit
annoying, or some queries don't work... This corrupts the database.

We can't grab a snapshot and compile our own version (well we could),
because if we do it won't be a MySQL Official Binary and if we have
problems with the database our clients wouldn't understand why we weren't
using a MySQL sanctioned version. On the other hand if we do use the
Official binary, our data will become corrupt.

The problem I am experiencing is not the delay in fixing the problem, but
the delay in releasing the fix. The two months between releases that seems
common at the moment isn't unreasonable in most cases, except when there's
a corrupting bug uncovered and fixed right after a release - February is
too long to wait for this fix to be included in an official binary.

In my mind (as I am directly affected by this bug) this one is serious
enough to release a new build asap.

As a side note with demonstrated performance increases when using
innodb_file_per_table why aren't more people using it?

Best Regards, Bruce

On 12/29/04 10:22 PM, Heikki Tuuri wrote:

 Bruce,
 
 It is the bug "innodb_file_per_table corrupts secondary indexes".
 
 I fixed it with several changesets on Sunday:
 
 http://lists.mysql.com/internals
 
 Thus, it is fixed in the current 4.1 bk tree.
 
 This is indeed the worst InnoDB corruption bug since the BLOB update bug of
 summer 2001. Fortunately, the  bug affects few users, because not too many
 are running with innodb_file_per_table.
 
 Regards,
 
 Heikki
 
 
 On 12/28/04 2:38 PM, Bruce Dembecki [EMAIL PROTECTED] wrote:

 In the MySQL Manual under InnoDB in the Using Per-Table Tablespace section
 it says clearly at the top:
 
 NOTE: CRITICAL BUG in 4.1 if you specify innodb_file_per_table in `my.cnf'!
 If you shut down mysqld, then records may disappear from the secondary
 indexes of  a table. See (Bug #7496) for more information and workarounds.
 
 Following the link to Bug 7496 (http://bugs.mysql.com/bug.php?id=7496) we are
 told two important things:
 
 1. This is the worst InnoDB corruption bug in 3 years.
 2. Will be fixed in 4.1.9.
 
 So thanks to Heikki for finding and fixing this.

 So now to the question...
 
 As a person in the process of migrating from 4.0 to 4.1 and having already
 scheduled the downtime with my clients for this Friday morning, and having to
 do a full dump and import already as part of the migration process I'd like
 to know WHEN the fix will be available. I don't have a lot of opportunities
 for a full dump and import, so this is a crucial time for me, and there are
 some benefits with innodb_file_per_table that are important to us.
 
 If we go with history then we should expect a new version of the current
 MySQL products every 2 months approximately. Having just received 4.1.8 I'd
 not like to see MySQL leave InnoDB's worst corruption bug in three years sit
 for two months when a fix has already been written.
 
 Can we have a new build with this fix included please? When can we have it?
 The grab it from the nightly snapshots and compile it yourself answer won't

 cut it when we have to deploy into production and MySQL's company line is to
 only use MySQL official binaries in production.
 
 If not 4.1.9 can we call it 4.1.8b and get it shipped (there's already a
 4.1.8a).

 Best regards, Bruce





Re: Monitoring replication in mysql

2004-12-28 Thread Bruce Dembecki
We use a monitoring system that does TCP based checks on our various systems
and can alert us based on criteria we define. So we write shell scripts
that run locally and return certain values, and tie those scripts to
specific TCP ports using /etc/inetd.conf and /etc/services. This is the
script we use to monitor replication on every machine (it's much shorter
without my excessive comments):

#!/bin/sh
#
#
# Bruce's MySQL Replication Verification Script
#

/usr/local/mysql/bin/mysql -e "show status like 'Slave_running';"


This script is then tied to a port, so any web browser or our monitoring
system that hits http://mysqlserver: (or whatever port you decide on) should
get this:

Variable_name    Value
Slave_running    ON

From there our monitor takes that data and looks for the keyword ON; if
it's there it's happy. If it matches the keyword OFF it sends an alert
page and marks the instance as in warning state; any response that doesn't
include ON or OFF generates a service down state and also sends pages
etc... (If MySQL is running then the slave status will be either ON or
OFF... If MySQL isn't running, the mysql client returns its own error saying
it's unable to connect.)
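
The glue between the script and a TCP port looks roughly like this; the
service name, port number, user, and script path are all invented for
illustration:

```shell
# /etc/services -- pick an unused port:
#   mysqlrepl   9306/tcp
#
# /etc/inetd.conf -- run the check script on each connection:
#   mysqlrepl stream tcp nowait mysql /usr/local/bin/check-repl.sh check-repl.sh
#
# Then tell inetd to re-read its configuration (pid file path varies by OS):
kill -HUP "$(cat /var/run/inetd.pid)"
```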

Best Regards, Bruce

On 12/28/04 1:44 PM, Bruce Dembecki [EMAIL PROTECTED] wrote:

 Tucker, Gabriel wrote:
 Anil
 
 Write a script that does a slave status and check if either of the threads
 are
 running.  You could
 further check for error numbers and descriptions.  This is what we do.
 
 Gabe
 -Original Message-
 From: Anil Doppalapudi [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, December 21, 2004 6:05 AM
 To: [EMAIL PROTECTED]
 Subject: Monitoring replication in mysql
 
 
 
 Hi,
 
 we have no of mysql replication setups in our setup. how to monitor those
 replication setups.my aim is if any slave goes down
 my script should immediately send an alert mail to me. if anybody having
 already developed scripts please let me know otherwise just give me an idea
 what to monitor in in replication setup.
 
 Thanks
 Anil
 DBA
 
 We have a script that monitors output from SHOW SLAVE STATUS, but
 actually had one time when replication died, but output from above
 command looked perfectly fine.  It was due to massive table corruption,
 which was in turn due to filesystem corruption.  Now, we have the same
 test running, but we also have a backup monitor which inserts a value in
 the master and tries to read it from all replicants.  We allow an
 acceptable delay (5-10 minutes) before we page all admins with this
 backup test.
 
 Greg
 





Fixing the worst InnoDB corruption bug in 3 years - when

2004-12-28 Thread Bruce Dembecki
In the MySQL Manual under InnoDB in the Using Per-Table Tablespace section
it says clearly at the top:

NOTE: CRITICAL BUG in 4.1 if you specify innodb_file_per_table in `my.cnf'!
If you shut down mysqld, then records may disappear from the secondary
indexes of a table. See (Bug #7496) for more information and workarounds.

Following the link to Bug 7496 (http://bugs.mysql.com/bug.php?id=7496) we
are told two important things:

1. This is the worst InnoDB corruption bug in 3 years.
2. Will be fixed in 4.1.9.

So thanks to Heikki for finding and fixing this.

So now to the question...

As a person in the process of migrating from 4.0 to 4.1 and having already
scheduled the downtime with my clients for this Friday morning, and having
to do a full dump and import already as part of the migration process I'd
like to know WHEN the fix will be available. I don't have a lot of
opportunities for a full dump and import, so this is a crucial time for me,
and there are some benefits with innodb_file_per_table that are important to
us.

If we go with history then we should expect a new version of the current
MySQL products every 2 months approximately. Having just received 4.1.8 I'd
not like to see MySQL leave InnoDB's worst corruption bug in three years sit
for two months when a fix has already been written.

Can we have a new build with this fix included please? When can we have it?
The "grab it from the nightly snapshots and compile it yourself" answer
won't cut it when we have to deploy into production and MySQL's company line
is to only use MySQL official binaries in production.

If not 4.1.9 can we call it 4.1.8b and get it shipped (there's already a
4.1.8a).

Best regards, Bruce





Mac OSX Tiger and 64 Bit

2004-06-29 Thread Bruce Dembecki
So I have a question for those who understand developer speak and MySQL
builds and so on...

Apple announced their new OS earlier this week, including this information
on the improvements to 64 Bit version using the G5 processor:

http://www.apple.com/macosx/tiger/64bit.html

One of our biggest problems to date on our G5 servers is that despite the
bulk RAM we have installed, the current Apple OS isn't really 64 Bit, so we
can't give the InnoDB caches more than 2Gb of RAM, and thus the buffer pool
never has any free pages.

This statement from Apple stops short of saying the OS is fully 64 bit...
But I think they are saying that apps such as mysqld will be able to address
larger chunks of memory, which is what we want.

Between MySQL's strong Apple ties and the build engineers working on MySQL
binaries and the knowledgeable members of this list can anyone interpret
this statement from Apple and tell us if we will be able to increase the
InnoDB cache settings to take advantage of the memory in the systems?

As our application uses many different databases and any application server
only ever speaks to one database, I am seriously considering running
multiple instances of MySQL on a single machine with different databases -
but it's a headache to administer... I'd rather use my 64 bit hardware and
MySQL's 64 bit builds and use the memory in the machine in a single
instance... Comments welcome.

Best Regards, Bruce





mysqlbindump feature request...

2004-06-10 Thread Bruce Dembecki
I'm not sure of the right way to submit these things, so I'll do it here...

I want to dump some data from the binlogs and process it back into the
servers. However I just want to process the data from one specific server.
In mysqlbinlog I can optionally specify a specific database for the
information to dump... I want to specify a server id...

Eg I want to dump from binary-log.23 all the information that has a server
id value of 12.

I was thinking something like:

mysqlbinlogdump --serverid=12 binary-log.23
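
In the meantime, a rough workaround sketch: the tool's text output prints a
header comment containing "server id N" before each event, so a filter can
toggle on those lines. The exact header layout is an assumption here, so
check it against your own mysqlbinlog output before trusting the result:

```shell
# Keep only the events whose header says "server id 12" (hypothetical
# filter -- verify the header format on your version first).
mysqlbinlog binary-log.23 \
  | awk '/^#.* server id /{keep = ($0 ~ / server id 12[^0-9]/)} keep' \
  > server-id-12.sql
```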

Best Regards, Bruce





lower_case_table_names, Mac OSX, InnoDB and 4.0.18

2004-02-27 Thread Bruce Dembecki
There seems to be some issues we are experiencing with this new
lower_case_table_names variable introduced in 4.0.17 and modified in 4.0.18.
So much so that I can't upgrade to 4.0.18 at all.

Here is the startup log from a 4.0.18 mysqld:

040227 02:00:22  mysqld started
040227  2:00:22  Warning: Setting lower_case_table_names=2 because file
system for /mysqldata/ is case insensitive
InnoDB: Error: tablespace size stored in header is 2438400 pages, but
InnoDB: the sum of data file sizes is 256 pages
040227  2:00:23  InnoDB: Started
/usr/local/mysql/bin/mysqld: ready for connections.
Version: '4.0.18-standard-log'  socket: '/tmp/mysql.sock'  port: 3306
040227  2:00:23  InnoDB error:
Cannot find table jive_ebay_us/jiveuser from the internal data dictionary of
InnoDB though the .frm file for the table exists. Maybe you have deleted and
recreated InnoDB data files but have forgotten to delete the corresponding
.frm files of InnoDB tables, or you have moved .frm files to another
database? Look from section 15.1 of http://www.innodb.com/ibman.html how you
can resolve the problem.
ERROR: 1016  Can't open file: 'jiveuser.InnoDB'. (errno: 1)
040227  2:00:23  Slave: Error 'Can't open file: 'jiveuser.InnoDB'.
(errno:1)' on query 'UPDATE jiveUser SET some user data WHERE some
qualifer'. Default database: 'jive_ebay_us', Error_code: 1016
040227  2:00:23  Error running query, slave SQL thread aborted. Fix the
problem, and restart the slave SQL thread with SLAVE START. We stopped at
log 'binary-log.008' position 781142831

So, let's walk through these... For starters, mysqld decides on its own
that it will set lower_case_table_names=2 because the "file system for
/mysqldata/ is case insensitive"... Yes, that's true, the file system is
case insensitive; it's Mac OS X 10.3. However, because we had issues with
this when the feature was introduced in 4.0.17, we have a statement in our
my.cnf file which specifically says "set-variable=
lower_case_table_names=0". Despite us manually telling MySQL that we want it
to be case sensitive in all cases, it ignores our setting and chooses its
own.

Now that it has ignored our request to be case sensitive all of the time,
for all of the requests we are experiencing problems with InnoDB not
recognizing tables that we have. Specifically, in the error cited above we
have a table called jiveUser; that is evident in the way the request is made
to the database to update jiveUser... Yet InnoDB is failing because it can't
find jiveuser. So it seems that MySQL has behaved as promised with a
lower_case_table_names=2 value and converted the request to lower case...
InnoDB however can't find a lower case version of this table. Of course all
our table names have upper and lower case characters, and this particular
entry happened to score the jackpot because it was the first query to come
along after we booted 4.0.18. It happens on all our databases, on all our
tables.

I don't know what we are doing wrong here... Every value of
lower_case_table_names results in 4.0.18 not running (OK, it runs, but our
applications don't run with it), so we cannot upgrade to 4.0.18. MySQL
seems to be ignoring our my.cnf startup value for lower_case_table_names,
and InnoDB seems to be unable to find our tables when MySQL changes the
table names to lower case.

Best Regards, Bruce





OSX 10.3 Binaries and 64 Bit

2004-02-25 Thread Bruce Dembecki
Hi! One of my associates here read a report somewhere that mysqld, when
compiled under OS X 10.3, was 40%+ more efficient due to improvements in the
compilers and the way 10.3 works. Also 10.3 is a 64 Bit Operating System and
it would be a major benefit to us to set some of the memory values in excess
of 2GBytes on our 64 Bit G5 servers. Sadly my associate can't find the
original report any longer, so I don't have the reference material or exact
information.

MySQL themselves repeatedly say we are better off using MySQL compiled
binaries than compiling our own. So how can we take advantage of these
OS X 10.3 compiler gains when MySQL is only offering 10.2 compiled versions
on their web site?

Which brings me to my questions...

If the OSX 10.3 Compilers are so much more efficient and result in major
performance gains how long will it be before MySQL starts providing a MySQL
binary compiled for OSX 10.3?

Along with regular 10.3 is it possible MySQL will offer Binaries for 10.3 64
bit for those people with G5 servers (and with Apple being a major sponsor
of the forthcoming Users conference and making a big deal about the new G5
servers I would think they have a major interest in this issue too).

In the absence of these apparently faster, improved versions of MySQL, if
we did want to make our own binaries using 10.3, can someone talk us Mac
guys (who haven't had the long history of building our own binaries most of
the Unix guys have) through the exact process of getting the source code and
compiling it on our systems?
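
For what it's worth, a rough sketch of the generic source build of that era;
the tarball name is a placeholder (fetch the real 4.0 source from mysql.com)
and none of this is Apple-specific tuning:

```shell
# Classic autoconf-style build (version and prefix are placeholders).
tar xzf mysql-VERSION.tar.gz
cd mysql-VERSION
./configure --prefix=/usr/local/mysql
make
sudo make install
```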

Best Regards, Bruce





Master refusing Replication connections

2004-02-21 Thread Bruce Dembecki
Help, I seem to be running into a problem with replication which up until
now has served us well.

We run mysql servers in pairs, with each server in a pair mastering off the
other. So for example mysql1 masters off mysql2, which masters off mysql1.

Friday morning one server stopped accepting connections for replication;
let's call it mysql2. Its partner, called mysql1, has some log entries
which seem on the face of it self explanatory...

040221 15:35:40  Slave I/O thread: error connecting to master
'[EMAIL PROTECTED]:3306': Error: 'Access denied for user:
'[EMAIL PROTECTED]' (Using password: YES)'  errno: 1045  retry-time:
60  retries: 86400

Now even I can read that it doesn't like the username/password combination.
Let's just settle for the fact that I can change the username/password
combination to any set of usernames/passwords that are valid for replication
connections and I still get the same error. In cases where that
username/password is enabled to log on for other purposes - say my personal
sysadmin account, which has all access enabled - I can log in fine using the
mysql client, but when trying that username/password in replication I get
the same error as above.
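
One thing worth ruling out while debugging this is the replication account
itself. The user, host, and password below are placeholders, not the real
credentials; REPLICATION SLAVE is the one privilege a replication login
needs:

```shell
# On mysql2 (the master refusing connections) -- placeholders throughout.
mysql -e "GRANT REPLICATION SLAVE ON *.* TO 'repl'@'mysql1.example.com' IDENTIFIED BY 'secret';"
mysql -e "FLUSH PRIVILEGES;"
```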

So one of my server pairs has fallen way behind on current data... I have
moved all my application servers to point to the fully up to date server. I
can't point them to mysql1 and bounce mysql2; the data is a day and a half
out of date.

So far I've tried everything I know to get replication working short of
bouncing mysql2... the seemingly problematic master. Clearly bouncing the
only server we have will cause me some heartache with the application
servers and I will have to co-ordinate the timing to such an action with all
our clients... Our next scheduled maintenance window isn't till Friday
morning, I don't want to run on one server for a week here.

Anyone got any ideas or suggestions on how I can resolve this issue? Will
bouncing even help me?

Best Regards, Bruce





Re: Memory Problems on a G5/OSX/MySql4.0.17

2004-01-30 Thread Bruce Dembecki
 On Jan 28, 2004, at 12:01 PM, Bruce Dembecki wrote this wonderful stuff:
 
 So.. My tips for you:
 
 1) Consider a switch to InnoDB, the performance hit was dramatic, and it's
 about SO much more than transactions (which we still don't do)!
 
 Consider it switched! as soon as I find the way to do so :)
 Are there any changes necessary to his code/queries to use innodb?
 
No changes needed to code/queries, except probably the daily table
optimize/repair can go away... As far as how to do it...

It seems you can (or are forced to) pretty much have daily down time... All
you need is some time to bring the machine down for a quick bounce... First
edit the my.cnf file to add the InnoDB settings... You have a setting in
your current my.cnf disabling InnoDB; that needs to go away. My InnoDB
settings are below. You probably don't need 40 gigs of disk space for
InnoDB, but you should allow at least 4 x data size (InnoDB keeps data times
2 so it can roll back transactions, and then there are indexes etc...). So
decide how big you want your space to be and make the appropriate changes in
my.cnf to make it happen... Whatever you go with you are stuck with as a
minimum; it's hard to go smaller once it is live.

After you bounce MySQL and InnoDB is an option, you will still need some
memory for MyISAM until you change the tables... So maybe set the
sort/read/join buffers to 2M if they aren't already, the Key Buffer to say
250M, and the InnoDB numbers close to mine. After you have moved the tables,
just drop the Key Buffer down to 16M or so. Oh yes, and as I say below, drop
that query cache; 64M is plenty for you I expect, the rest is being wasted.

Once this is all setup go ahead and bounce the server so the new settings
take place.

So now you have InnoDB available... All you need now is to change the
data... The command is:

ALTER TABLE xxx TYPE=InnoDB;

Once again, do NOT change the mysql database, it MUST stay as MyISAM, and
doesn't affect your performance at any rate.
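
A hedged sketch of automating that across one database; "mydb" is a
placeholder, and the awk test assumes the table type is the second field of
SHOW TABLE STATUS output, which is worth verifying on your version first:

```shell
# Generate ALTER statements for every MyISAM table in one database,
# deliberately never touching the mysql system database. Review the
# generated file before running it.
mysql -N -e "SHOW TABLE STATUS FROM mydb;" \
  | awk '$2 == "MyISAM" { print "ALTER TABLE mydb." $1 " TYPE=InnoDB;" }' \
  > convert-to-innodb.sql
# Then, during the maintenance window:
#   mysql mydb < convert-to-innodb.sql
```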

Once this is done you can go ahead and drop the Key Buffer right down to
something tiny, 16M or so is what we have, and bounce MySQL again and you
are all set.

 2) Drop the query cache to something more practical, a gigabyte is fine if
 your data is static, if it's not it's way too much. We use 128MBytes and
 typically have about a 30% hit rate on the Query cache and the busiest
 server is showing 80MBytes unused memory in the query cache and a 41%
 hit rate, and our databases take about 40G of disk space. Remember having
 a big query cache doesn't help if it's mostly sitting unused (in fact if
 ours are still sitting with 80M free in a week I'll drop all of them
 64MBytes). 
 
 we have an average of ~15-20%, with times sustaining 30+%
 
Errmm... The stats you included say that the Query Cache is WAY out of
control. There's like a gigabyte of unused space cached there.

 3) Give lots of memory to InnoDB, I'll share my settings below.
 
 Thank You! 
 
 4) Take most of the non InnoDB memory settings and drop them down real low,
 InnoDB does well on it's own and if you convert all tables you don't need to
 leave much in the way of resources for MyISAM.
 
 ok 
 
 5) Turn on and use the slow query log (and if need be change the time
 needed to qualify as a slow query, the default 10 seconds is a lifetime). You
 may not code the queries yourself, but you can identify the queries that
 are causing problems and from there you can advise the client on changes
 to the database structure (indexes etc) or at least tell him exactly what the
 problem queries are.
 
 The slow log has helped us a lot in the past... with the current slow
 log settings, only about 0.1% are slow queries - 3K out of 4 million in the
 past 18 hours. 
 Currently the time appears to be set at 2 (From show variables:
 slow_launch_time   2 ).
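One caution on that last number: slow_launch_time counts threads that were
slow to launch, not slow queries; the threshold for the slow query log is
long_query_time. A minimal 4.0-era my.cnf fragment (the log path is
illustrative):

```ini
# Enable the slow query log and lower the threshold from the default 10s.
log-slow-queries = /var/log/mysql/slow.log
set-variable     = long_query_time=2
```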
 
 6) Go get MyTOP from Jeremy Zawodny at
 http://jeremy.zawodny.com/mysql/mytop/ - personally I like version 1.3
 but that may just be what I am used to... You may not be able to control
 the coding part but you can at least monitor the server and see what it's
 up to and quickly and easily see problems.
 
 Great tool.. only recently started using it.
 
 7) If you decide to stay with MyISAM and not InnoDB then you will want
 as much memory as you can in the Key Buffer while leaving some space in
 the sort/read/join buffers.. I'd up the sort/read/join buffers to maybe
 10MBytes, or even 20Mbytes, if you need to drop Key buffer to 1500M to
 give you the space for the others. We got OKish results on MyISAM with the
 larger sort/read/join buffers - InnoDB made all the difference though.
 
 I've only gone as high as 6M on those before.
 
 Before giving you our settings I do want to point out one thing... We
 haven't fine tuned the memory settings since we did the G5 switch. At
 the time I was bringing the machines up they needed to be up quickly, and
 when it didn't work correctly with my original settings I had to make
 adjustments to get it to run at all

Re: Memory Problems on a G5/OSX/MySql4.0.17

2004-01-28 Thread Bruce Dembecki
 I don't think there would be any benefit to using InnoDB, at least not
 from a transaction point of view

For the longest time I was reading the books and listening to the experts
and all I was hearing is InnoDB is great because it handles transactions.
Having little interest in transactions per se I pretty much started tuning
things out whenever people mentioned InnoDB.

One day when talking to some MySQL AB folks they asked why I wasn't using
InnoDB... I kind of looked at them blankly and replied that I don't need
transactions, and they looked back as if I was mad.

Turns out InnoDB is far better at handling large databases than MyISAM, we
had a massive (and I do mean massive) increase in performance just by
switching to InnoDB. Uses a little more disk space, but it's worth it, and
with a 5GByte database and a G5 server you have room to spare, even if you
only got the smaller disks.

InnoDB is a major thing for us now, everything is InnoDB. If an Engineer
complains something they have done is running slowly it usually turns out to
be they made some new thing and didn't make the table InnoDB. The fix is
easy and quick. I also suspect that you could do away with that nightly
table repair that ties up the machine for hours at a time if you were using
InnoDB.

We have 4 G5 towers serving MySQL for us, all DP2GHz machines with 4GBytes
of RAM. If your data is changing rapidly, as it appears from your samples
most pages include some sort of insert, you will have limited benefit from
the Query cache - every time a table receives any type of change to its
data any queries in the query cache that use that table are dumped. In
February we are adding to the mix with 2 G5 XServes... These are for new
projects, the current servers are handling their loads fine.

On the Disk side we got the dual 250GBytes and mirrored them for redundancy,
speed isn't an issue as far as we can tell.

We chose to replace our old database servers with G5s. The old machines were
quad processor Sun boxes, and one was an 8 CPU machine. The G5s left them
all for dead in terms of performance, although I'd prefer a couple of extra
processors, something inside me still feels better knowing that when a
process goes AWOL it's not holding up 50% of the server's resources. The
Application servers are still typically Sun, although new ones won't be.

We average about 140 Queries per second per machine (of course the load
isn't that well distributed... but it gives you an idea), and typical high
points are about 400 - 500 qps on any given machine without stressing the
machines (replication catch up can see 1500 - 2000 queries per second, but
that's not so common and of course is mostly inserts).

Before we did the upgrade to 4.0.17 during last Friday's maintenance window
we were over 1.5 billion queries total for the 28 days the machines had been
up.
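Those two figures are consistent with each other, as a quick
back-of-the-envelope check shows:

```shell
# 1.5 billion queries over a 28-day uptime, spread over 4 servers.
total=1500000000
secs=$(( 28 * 86400 ))
echo "$(( total / secs )) qps fleet-wide"       # ~620 qps
echo "$(( total / secs / 4 )) qps per machine"  # ~155, near the 140 average
```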

So.. My tips for you:

1) Consider a switch to InnoDB, the performance hit was dramatic, and it's
about SO much more than transactions (which we still don't do)!

2) Drop the query cache to something more practical, a gigabyte is fine if
your data is static, if it's not it's way too much. We use 128MBytes and
typically have about a 30% hit rate on the Query cache and the busiest
server is showing 80MBytes unused memory in the query cache and a 41% hit
rate, and our databases take about 40G of disk space. Remember having a big
query cache doesn't help if it's mostly sitting unused (in fact if ours are
still sitting with 80M free in a week I'll drop them all to 64MBytes).

3) Give lots of memory to InnoDB, I'll share my settings below.

4) Take most of the non InnoDB memory settings and drop them down real low,
InnoDB does well on its own and if you convert all tables you don't need to
leave much in the way of resources for MyISAM.

5) Turn on and use the slow query log (and if need be change the time needed
to qualify as a slow query, the default 10 seconds is a lifetime). You may
not code the queries yourself, but you can identify the queries that are
causing problems and from there you can advise the client on changes to the
database structure (indexes etc) or at least tell him exactly what the
problem queries are.

6) Go get MyTOP from Jeremy Zawodny at
http://jeremy.zawodny.com/mysql/mytop/ - personally I like version 1.3 but
that may just be what I am used to... You may not be able to control the
coding part but you can at least monitor the server and see what it's up to
and quickly and easily see problems.

7) If you decide to stay with MyISAM and not InnoDB then you will want as
much memory as you can in the Key Buffer while leaving some space in the
sort/read/join buffers.. I'd up the sort/read/join buffers to maybe
10MBytes, or even 20Mbytes, if you need to drop Key buffer to 1500M to give
you the space for the others. We got OKish results on MyISAM with the larger
sort/read/join buffers - InnoDB made all the difference though.

Before giving you our settings I do want to point out one 

Re: Memory Problems on a G5/OSX/MySql4.0.17

2004-01-28 Thread Bruce Dembecki
On 1/28/04 10:29 AM, stairwaymail-mysql at yahoo dot com wrote:

So should we always use InnoDB over BerkeleyBD? I was
under the impression Berkeley was faster and better at
handling transactions.

Dan 


Eermm... That's outside my scope of expertise, my experiences have been
exclusively with InnoDB and before that MyISAM, and we don't do
transactions. 

The point I was making by mentioning the transaction side of things was in
response to the earlier comments that InnoDB might help the person out if
they do transactions. Most mention of InnoDB comes into play when people
want transactions, but it turns out InnoDB is much better at large databases
than MyISAM in many many situations. All I was saying is that InnoDB isn't
JUST about transactions.

Best Regards, Bruce
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB caused crash and left me a log entry...

2003-11-25 Thread Bruce Dembecki
Thanks Heikki, I'll send those along a little later today. I believe that
there is a second row in the table that has the name value 'Technical
Questions: API'... If I am reading your comments correctly we are running
into a problem where the Key is limited to the first ten characters and
therefore despite the name having different values the first 10 characters
would be the same, thus causing the confusion with the key.

The table was designed pre 4.0.14, this particular one was likely deployed
originally under 4.0.13. Essentially we have about 40 deployments of
databases with identical structure for different clients, about half of them
for the same client just with instances for different countries and
purposes... So this is something we want to understand carefully.
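The collision is easy to see: under a 10-character prefix index, both
names reduce to the same key value. A quick illustration (printf's
precision specifier does the truncation):

```shell
# First 10 characters of each name - identical, so the prefix index
# cannot tell the two rows apart.
printf '[%.10s]\n' 'Technical Questions: API'   # -> [Technical ]
printf '[%.10s]\n' 'Technical Questions: SDK'   # -> [Technical ]
```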

As for operations to the table... This particular table doesn't see a lot of
changes. We run discussion boards, and this table holds the top level
listings for all the boards, or the list of Forums as we call it. Within
each forum there are multiple threads and messages, which have their own
tables, most of our table changes occur here. It's quite likely that once a
product is launched the Forums table doesn't get changed for months at a
time. That said I routinely (aka monthly) run a script which walks through
all our databases and tables and empties our InnoDB file space by
sequentially doing an ALTER TABLE ... TYPE=MyISAM on each. Once completed the script
walks through again and turns them back into InnoDB. The intent here is just
to clean up the file space, rebuild the indexes etc... This procedure
typically increases our free space in the InnoDB file space and improves
performance significantly. So despite the fact the data stored in the table
hasn't changed since this database was launched, the table has been swapped
between InnoDB and MyISAM on a monthly basis.
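A sketch of that monthly rebuild (the table list and helper are mine; the
real script walks SHOW DATABASES / SHOW TABLES and pipes each statement
into the mysql client):

```shell
#!/bin/sh
# Two passes: first everything to MyISAM, then everything back to InnoDB.
# Rebuilding each table this way recreates its indexes and returns its
# pages to the shared InnoDB tablespace free list.
rebuild_passes() {
    for engine in MyISAM InnoDB; do
        for t in "$@"; do
            printf 'ALTER TABLE %s TYPE=%s;\n' "$t" "$engine"
        done
    done
}

rebuild_passes forums.jiveForum forums.jiveThread   # hypothetical tables
```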

Last night we were changing the data structure as the client was changing
the format of their discussion boards... It's a rare thing, but it happens.
Anyway I'll send you the table dump under separate cover later this morning.

Already your explanation makes sense given what I know about the other
record having a similar value.

Best Regards, Bruce

On 11/25/03 1:29 AM, Heikki Tuuri [EMAIL PROTECTED] wrote:

 Bruce,
 
 I am not able to repeat the crash. I tested on Linux with 4.0.17.
 
 You have the index
 
 KEY `jiveForum_name_idx` (`name`(10)),
 
 The bug is probably in the column prefix index. That feature was introduced
 in 4.0.14. Did you create the table with a version < 4.0.14? What kinds of
 operations have you done with the table? Updates, deletes?
 
 The failing assertion is the one below. InnoDB has determined that a
 secondary index record is alphabetically equal to the new value (which is
 understandable, because in your update the first 10 characters do not
 change), but it turns out that the field length is NOT the same in the
 updated value.
 
 If I cannot repeat the crash, I will add diagnostic code to that place
 anyway, so that we get more information of the bug. Also note that OS X is
 not as well tested as Linux. There may be file corruption bugs in OS X.
 
 Please send me a dump of the table for more testing, and also your my.cnf.
 
 Best regards,
 
 Heikki Tuuri
 Innobase Oy
 http://www.innodb.com
 Foreign keys, transactions, and row level locking for MySQL
 InnoDB Hot Backup - hot backup tool for InnoDB which also backs up MyISAM
 tables
 
 Order MySQL technical support from https://order.mysql.com/
 
 
 
 /***
 Builds an update vector from those fields which in a secondary index entry
 differ from a record that has the equal ordering fields. NOTE: we compare
 the fields as binary strings! */
 
 upd_t*
 row_upd_build_sec_rec_difference_binary(
 /*====================================*/
   /* out, own: update vector of differing
   fields */
   dict_index_t*   index,  /* in: index */
   dtuple_t*   entry,  /* in: entry to insert */
   rec_t*  rec,/* in: secondary index record */
   mem_heap_t* heap)   /* in: memory heap from which allocated */
 {
   upd_field_t*upd_field;
   dfield_t*   dfield;
   byte*   data;
   ulint   len;
   upd_t*  update;
   ulint   n_diff;
   ulint   i;
 
   /* This function is used only for a secondary index */
    ut_ad(0 == (index->type & DICT_CLUSTERED));
 
   update = upd_create(dtuple_get_n_fields(entry), heap);
 
   n_diff = 0;
 
    for (i = 0; i < dtuple_get_n_fields(entry); i++) {
 
    data = rec_get_nth_field(rec, i, &len);
 
   dfield = dtuple_get_nth_field(entry, i);
 
   ut_a(len == dfield_get_len(dfield));
 
 
 - Original Message -
 From: Bruce Dembecki [EMAIL PROTECTED]
 Newsgroups: mailing.database.myodbc
 Sent: Tuesday, November 25, 2003 7:28

InnoDB caused crash and left me a log entry...

2003-11-24 Thread Bruce Dembecki
InnoDB seems to have crashed on us, and put the errors below into the log
files... It took several crashes and some time but I was able to isolate the
extremely simple query involved.

Server is OSX 10.3 running on a Dual 2GHz G5 with 4Gigs RAM. MySQL is
4.0.16.

The original queries to crash were a little longer in that they also updated
the field description with a large block of text as well as the field
name. The following Queries repeatedly caused a crash when executed from
the mysql command line directly to mysqld with no other client connections
open:

update jiveForum set name='Technical Questions: SDK' where forumID=4;
update jiveForum set name="Technical Questions: SDK" where forumID=4;
update jiveForum set name='Technical Questions - SDK' where forumID=4;
update jiveForum set name='Technical Questions' where forumID=4;

These queries did not cause the crash:

update jiveForum set name='Technical' where forumID=4;
update jiveForum set name='SDK Questions' where forumID=4;

I eventually moved the table out of InnoDB, made the changes, and moved it
back to InnoDB again without experiencing further problems. In order to
restore client services I did not try again to reproduce the problem,
instant Database server crash is not an attractive experience, especially in
our production environment.

The table in question has 12 rows and looks like this:

CREATE TABLE `jiveForum` (
  `forumID` bigint(20) NOT NULL default '0',
  `name` varchar(255) binary NOT NULL default '',
  `description` text,
  `modDefaultThreadVal` bigint(20) NOT NULL default '0',
  `modMinThreadVal` bigint(20) NOT NULL default '0',
  `modDefaultMsgVal` bigint(20) NOT NULL default '0',
  `modMinMsgVal` bigint(20) NOT NULL default '0',
  `creationDate` varchar(15) NOT NULL default '',
  `modifiedDate` varchar(15) NOT NULL default '',
  `categoryID` bigint(20) NOT NULL default '1',
  `categoryIndex` int(11) NOT NULL default '0',
  PRIMARY KEY  (`forumID`),
  UNIQUE KEY `name` (`name`),
  KEY `jiveForum_name_idx` (`name`(10)),
  KEY `jiveForum_cat_idx` (`categoryID`),
  KEY `jiveForum_catIndex_idx` (`categoryIndex`)
) TYPE=InnoDB

Below are the logs of the first two crashes and some additional comments
from yours truly.

Best Regards, Bruce

031124 16:27:18  InnoDB: Assertion failure in thread 2167428608 in file
row0upd.c line 713
InnoDB: Failing assertion: len == dfield_get_len(dfield)
InnoDB: We intentionally generate a memory trap.
InnoDB: Send a detailed bug report to [EMAIL PROTECTED]
InnoDB: Thread 2168107008 stopped in file ha_innodb.cc line 396
InnoDB: Thread 8529408 stopped in file sync0arr.c line 126
InnoDB: Thread 8954880 stopped in file ha_innodb.cc line 396
InnoDB: Thread 12103168 stopped in file ha_innodb.cc line 396
InnoDB: Thread 8530432 stopped in file ../../innobase/include/sync0sync.ic
line 109
InnoDB: Thread 11085312 stopped in file ha_innodb.cc line 396
InnoDB: Thread 12269056 stopped in file ha_innodb.cc line 396
InnoDB: Thread 9271296 stopped in file ha_innodb.cc line 396
InnoDB: Thread 2166052864 stopped in file ha_innodb.cc line 396
InnoDB: Thread 15272448 stopped in file ha_innodb.cc line 396
InnoDB: Thread 2166610432 stopped in file ha_innodb.cc line 396
InnoDB: Thread 14812160 stopped in file ha_innodb.cc line 396
InnoDB: Thread 2166472704 stopped in file ha_innodb.cc line 396
InnoDB: Thread 2166551552 stopped in file ha_innodb.cc line 396
InnoDB: Thread 2167671808 stopped in file ha_innodb.cc line 396
InnoDB: Thread 15742976 stopped in file ha_innodb.cc line 396
InnoDB: Thread 15535616 stopped in file ha_innodb.cc line 396
InnoDB: Thread 16771584 stopped in file ha_innodb.cc line 396
InnoDB: Thread 2166734336 stopped in file ha_innodb.cc line 396
InnoDB: Thread 15129088 stopped in file ha_innodb.cc line 396
InnoDB: Thread 13153792 stopped in file ha_innodb.cc line 396
InnoDB: Thread 2166089216 stopped in file ha_innodb.cc line 396
InnoDB: Thread 2166471680 stopped in file ha_innodb.cc line 396
InnoDB: Thread 2167336448 stopped in file ha_innodb.cc line 396
InnoDB: Thread 10588160 stopped in file ha_innodb.cc line 396
InnoDB: Thread 9951744 stopped in file ha_innodb.cc line 396
InnoDB: Thread 9901056 stopped in file ha_innodb.cc line 396
InnoDB: Thread 14827520 stopped in file ha_innodb.cc line 396
InnoDB: Thread 12161536 stopped in file ha_innodb.cc line 396
InnoDB: Thread 9560576 stopped in file ha_innodb.cc line 396
InnoDB: Thread 2164816384 stopped in file ha_innodb.cc line 396
InnoDB: Thread 13114368 stopped in file ha_innodb.cc line 396
InnoDB: Thread 2164369408 stopped in file ha_innodb.cc line 396
InnoDB: Thread 2166552576 stopped in file ha_innodb.cc line 396
InnoDB: Thread 2166132224 stopped in file ha_innodb.cc line 396
InnoDB: Thread 9431552 stopped in file ha_innodb.cc line 396
InnoDB: Thread 9461760 stopped in file ha_innodb.cc line 396
InnoDB: Thread 14631424 stopped in file ha_innodb.cc line 396
InnoDB: Thread 9867264 stopped in file ha_innodb.cc line 396
InnoDB: Thread 13056512 stopped in file 

MacOSX 4.0.15 mysqld_safe restarts after STOP

2003-10-08 Thread Bruce Dembecki
Hi! I'm struggling to understand how to fix the mysqld_safe script for an
OSX machine. If I run mysql.server stop then the mysqld process is killed
and mysqld_safe promptly starts it again. I know mysqld_safe is supposed to
start mysqld if it stops but I also know there are times I need to work on
files and so on and when you need to stop it you need to stop it...

I'm afraid the scripting in mysqld_safe is beyond my ability to figure out.

mysql.server I can work with easily enough, and I was planning to kill the
mysqld_safe process when I got a confirmation that mysqld had stopped (when
the pid file disappeared), however I don't have a way (that I know) to tell
the PID of the sh that mysqld_safe is running under to add the kill to the
script, and I don't want to blindly start killing shells in the hope I get
the one running mysqld_safe.
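One way to find that shell without guessing (an assumption on my part, not
something from this thread): mysqld's parent process is the mysqld_safe
shell, so you can read mysqld's PID from its pid file and ask ps for the
parent. A sketch, with a hypothetical pid-file path:

```shell
# Find the parent (the mysqld_safe shell) of a given PID using ps.
get_parent_pid() {
    ps -o ppid= -p "$1" | tr -d ' '
}

# On a real server (hypothetical path):
#   mysqld_pid=`cat /usr/local/mysql/data/\`hostname\`.pid`
#   kill "$(get_parent_pid "$mysqld_pid")"   # stop the watcher shell first
#   kill "$mysqld_pid"                       # then mysqld itself
get_parent_pid $$
```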

On our Solaris systems this isn't a problem and the mysql.server stop
script stops mysqld and mysqld_safe then decides to quit... I don't know how
to fix it for OSX... I'm happy to add something to the relevant part of
mysql.server to kill mysqld_safe's shell if I knew the right shell to kill.

Any ideas here?

Best Regards, Bruce





Case (in)sensitive table names, 4.0.15, OSX, InnoDB

2003-09-26 Thread Bruce Dembecki
Hi. We are migrating our Solaris setup to an OSX server. I used InnoDB Hot
Backup to copy the InnoDB files, and copied the .frm files for each of the
databases.

Under OSX I connect to the server and it sees the databases. If I use
some_database where some_database has mixed case table names I see errors
such as this:

mysql> use some_database
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Didn't find any fields in table 'deletedForum'
Didn't find any fields in table 'deletedForumProp'
Didn't find any fields in table 'deletedMessage'
Didn't find any fields in table 'deletedMessageProp'

If I show tables; I get a list such as this:

mysql> show tables;
+-+
| Tables_in_some_database |
+-+
| deletedForum|
| deletedForumProp|
| deletedMessage  |
| deletedMessageProp  |


If I try to select data in a field I get errors such as this:

mysql> select * from deletedMessage;
ERROR 1146: Table 'some_database.deletedmessage' doesn't exist

If I show table status; I get information such as this (trimmed to the
Name and Comment columns; every other column is NULL):

mysql> show table status;
+--------------------+--------------------------------------------------------+
| Name               | Comment                                                |
+--------------------+--------------------------------------------------------+
| deletedForum       | Table 'some_database.deletedforum' doesn't exist       |
| deletedForumProp   | Table 'some_database.deletedforumprop' doesn't exist   |
| deletedMessage     | Table 'some_database.deletedmessage' doesn't exist     |
| deletedMessageProp | Table 'some_database.deletedmessageprop' doesn't exist |
+--------------------+--------------------------------------------------------+


In another database which has only lower case table names I have no problems
and all works well. Given I ask to select data from deletedMessage and it
tells me there is no table 'some_database.deletedmessage', something
somewhere dropped the uppercase in the table name...

The idea of making all the table names lower case isn't a good one, there
are 50+ databases, with 40+ tables each, and over 100 applications that call
these databases all with innumerable table calls.

I don't know if it's an InnoDB issue, an OSX issue or what's going on or how
I might solve it. I don't seem to see anything on the list archive about
this. Any ideas?

Best Regards, Bruce





Re: Case (in)sensitive table names, 4.0.15, OSX, InnoDB

2003-09-26 Thread Bruce Dembecki
Further examination of the documents reveals a variable that addresses
this... I fixed it with this entry in my.cnf:

set-variable= lower_case_table_names=0

According to the MySQL manual lower_case_table_names defaults to 0 for all
platforms except Windows, where it defaults to 1... Not clear why this
instance was behaving as if it was set to 1.

Best Regards, Bruce

On 9/25/03 11:27 PM, Bruce Dembecki [EMAIL PROTECTED] wrote:

 Hi. We are migrating our Solaris setup to an OSX server. I used InnoDB Hot
 Backup to copy the InnoDB files, and copied the .frm files for each of the
 databases.
 
 Under OSX I connect to the server and it sees the databases. If I use
 some_database where some_database has mixed case table names I see errors such
 as this:
 
 mysql> use some_database
 Reading table information for completion of table and column names
 You can turn off this feature to get a quicker startup with -A
 
 Didn't find any fields in table 'deletedForum'
 Didn't find any fields in table 'deletedForumProp'
 Didn't find any fields in table 'deletedMessage'
 Didn't find any fields in table 'deletedMessageProp'
 
 If I show tables; I get a list such as this:
 
 mysql> show tables;
 +-+
 | Tables_in_some_database |
 +-+
 | deletedForum|
 | deletedForumProp|
 | deletedMessage  |
 | deletedMessageProp  |
 
 
 If I try to select data in a field I get errors such as this:
 
 mysql> select * from deletedMessage;
 ERROR 1146: Table 'some_database.deletedmessage' doesn't exist
 
 If I show table status; I get information such as this (trimmed to the
 Name and Comment columns; every other column is NULL):
 
 mysql> show table status;
 +--------------------+--------------------------------------------------------+
 | Name               | Comment                                                |
 +--------------------+--------------------------------------------------------+
 | deletedForum       | Table 'some_database.deletedforum' doesn't exist       |
 | deletedForumProp   | Table 'some_database.deletedforumprop' doesn't exist   |
 | deletedMessage     | Table 'some_database.deletedmessage' doesn't exist     |
 | deletedMessageProp | Table 'some_database.deletedmessageprop' doesn't exist |
 +--------------------+--------------------------------------------------------+
 
 
 In another database which has only lower case table names I have no problems
 and all works well. Given I ask to select data from deletedMessage and it
 tells me there is no table 'some_database.deletedmessage', something
 somewhere dropped the uppercase in the table name...
 
 The idea of making all the table names lower case isn't a good one, there are
 50+ databases, with 40+ tables each, and over 100 applications that call these
 databases all with innumerable table calls.
 
 I don't know if it's an InnoDB issue, an OSX issue or what's going on or how I
 might solve it. I don't seem to see anything on the list archive about this.
 Any ideas?
 
 Best Regards, Bruce





Replication troubles

2003-06-13 Thread Bruce Dembecki
I thought I had most things figured out for our challenging replication
setup. However this morning we have a failure I can't figure out... Here are
the errors:

030613  5:13:50  Slave I/O thread: connected to master '[EMAIL PROTECTED]:3306',
replication started in log 'binary-log.035' at position 11496356
030613  5:13:50  Error reading packet from server: log event entry exceeded
max_allowed_packet; Increase max_allowed_packet on master
(server_errno=1236)
030613  5:13:50  Got fatal error 1236: 'log event entry exceeded
max_allowed_packet; Increase max_allowed_packet on master' from master when
reading data from binary log
030613  5:13:50  Slave I/O thread exiting, read up to log 'binary-log.035',
position 11496356


In the case of both the master and the slave the max_allowed_packet is set
to 1047552. In both cases I raised it to 2047552 just to be generous. Same
failure.
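Error 1236 means a single binary log event was bigger than the limit, so
the new value has to exceed the largest logged statement, not just nudge
past the old setting - raising 1047552 to 2047552 only adds about 1MB of
headroom. Something like this on both master and slave (16M is an
illustrative value, not one from this thread):

```ini
# my.cnf, 4.0 set-variable syntax - must be large enough to hold the
# biggest single statement written to the binary log.
set-variable = max_allowed_packet=16M
```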

I can stop and start this slave as it's not in live production, I can't stop
and start the master which feeds live discussion boards.

Any ideas?

Best Regards, Bruce

