I have a question about:
> If you want to regain some of the space used by the InnoDB file you
> will have to convert all InnoDB tables to MyISAM (or dump them to
> a SQL file), recreate the InnoDB file(s) and then recreate the
> original InnoDB tables.
So, just to be clear, is this the right procedure:
1 - Dump INNODB tables to SQL, double and triple check integrity
2 - Shut down MySQL
3 - Remove data and log files at the shell level:
ib_logfile0
ib_logfile1
innodb_data_1
4 - Start MySQL
5 - Regenerate tables from SQL dumped in step 1
I assume I could also rename the files in step 3, just in case,
right?
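In shell terms, the procedure above would look roughly like this. This is only a sketch: the database name `mydb`, the backup path, and the data/log file names all depend on your setup (datadir and innodb_data_file_path in my.cnf), so adjust accordingly:

```shell
# 1 - Dump the InnoDB tables (here: all tables in the database "mydb")
mysqldump --opt -u root -p mydb > /backup/mydb.sql

# sanity-check the dump before touching anything
grep -c 'CREATE TABLE' /backup/mydb.sql

# 2 - Shut down MySQL
mysqladmin -u root -p shutdown

# 3 - Rename (rather than remove) the InnoDB data and log files
cd /var/lib/mysql                  # your datadir may differ
mv ib_logfile0   ib_logfile0.bak
mv ib_logfile1   ib_logfile1.bak
mv innodb_data_1 innodb_data_1.bak

# 4 - Start MySQL; it recreates empty data and log files on startup
mysqld_safe &

# 5 - Reload the tables from the dump made in step 1
mysql -u root -p mydb < /backup/mydb.sql
```

Once you've verified everything reloaded cleanly, the `.bak` files can be deleted to actually reclaim the disk space.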
Jeff;
On Fri, 8 Oct 2004 16:34:31 +0300, "Dobromir Velev" <[EMAIL PROTECTED]>
said:
> Hi,
> According to the manual -
> http://dev.mysql.com/doc/mysql/en/OPTIMIZE_TABLE.html
> http://dev.mysql.com/doc/mysql/en/InnoDB_File_Defragmenting.html
>
> running a null ALTER statement - ALTER TABLE tbl_name TYPE=INNODB; - will
> rebuild the table, thus optimizing the way the table is written to the
> disk. It will fix the physical ordering of the index pages on the disk,
> reducing the time MySQL needs to perform an index seek. It will not
> decrease the space used by the InnoDB file, but it could speed things up.
> If you want to regain some of the space used by the InnoDB file you will
> have to convert all InnoDB tables to MyISAM (or dump them to a SQL file),
> recreate the InnoDB file(s) and then recreate the original InnoDB tables.
> This process could take a lot of time depending on the size of your
> tables, so you should proceed with care.
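In concrete terms, the null ALTER looks like this (a hedged sketch; the table name `orders` is just a placeholder). `TYPE=` is the older spelling; `ENGINE=` is the preferred synonym on 4.x servers:

```sql
-- Null ALTER: rebuilds the table and all of its indexes in place.
-- "orders" is a placeholder table name.
ALTER TABLE orders TYPE=INNODB;

-- Equivalent, preferred spelling on newer 4.x servers:
ALTER TABLE orders ENGINE=InnoDB;
```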
>
>
> HTH
>
> --
> Dobromir Velev
> [EMAIL PROTECTED]
> http://www.websitepulse.com/
>
> On Thursday 07 October 2004 22:07, Boyd E. Hemphill wrote:
> > The documentation is not clear on this point. Here is a quote:
> >
> > 'For BDB tables, OPTIMIZE TABLE is currently mapped to ANALYZE TABLE. It
> > was also the case for InnoDB tables before MySQL 4.1.3; starting from this
> > version it is mapped to ALTER TABLE.'
> >
> > What is meant by its being mapped to ALTER TABLE? Also, what exactly
> > happens after 4.1.3? Is space, in fact, recovered and defragged?
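"Mapped to ALTER TABLE" appears to mean the server rewrites the statement internally. A hedged sketch of the equivalence (the table name `t` is a placeholder):

```sql
-- What you type:
OPTIMIZE TABLE t;

-- What the server effectively runs for an InnoDB table as of 4.1.3:
-- a null ALTER that rebuilds the table and its indexes (defragmenting
-- within the tablespace, but not shrinking the shared tablespace file)
ALTER TABLE t ENGINE=InnoDB;

-- What it was treated as before 4.1.3 (and still is for BDB tables):
ANALYZE TABLE t;
```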
> >
> > Thanks for your time!
> >
> > Best Regards,
> > Boyd E. Hemphill
> > MySQL Certified Professional
> > [EMAIL PROTECTED]
> > Triand, Inc.
> > www.triand.com
> > O: (512) 248-2278
> > M: (713) 252-4688
> >
> > -----Original Message-----
> > From: Christopher L. Everett [mailto:[EMAIL PROTECTED]
> > Sent: Wednesday, October 06, 2004 6:23 PM
> > To: 'Mysql List'
> > Subject: Re: Repeated corruption with MySQL 4.1.x using FULLTEXT indexes
> >
> > Ed Lazor wrote:
> > >>-----Original Message-----
> > >>From: Christopher L. Everett [mailto:[EMAIL PROTECTED]
> > >>Sent: Wednesday, October 06, 2004 1:47 AM
> > >>To: Mysql List
> > >>Subject: Repeated corruption with MySQL 4.1.x using FULLTEXT indexes
> > >>
> > >>I have an application where I create a fairly large table (835MB) with a
> > >>fulltext index. One of our development workstations and our production
> > >>server will run the script to load the table, but afterwards we have
> > >>pervasive corruption, with out-of-range index pointer errors.
> > >>Oddly, my development workstation doesn't have those problems.
> > >>
> > >>My box and the ones having the problems have the following differences:
> > >>
> > >> - my box runs ReiserFS, the problem boxes run XFS
> > >> - my box has a nice SCSI HD subsystem, the problem boxes do IDE.
> > >>
> > >>All three boxes run Linux 2.6.x kernels, and my workstation and
> > >> production server share the same mobo. Come to think of it, I saw
> > >> similar corruption issues under 2.4.x series kernels and MySQL v4.0.x;
> > >> it just wasn't the show-stopper it is now.
> > >>
> > >>Also, on all three boxes, altering the table to drop an index and create
> > >>a new one requires a "myisamchk -rq" run afterwards when a fulltext index
> > >>either exists or gets added or dropped, which I'd also call a bug.
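(For reference, the repair run mentioned above is something like the following; the path is a placeholder and depends on your datadir, and it should be run with the server stopped or the table flushed and locked:)

```shell
# -r = recover (rebuild the table), -q = quick (recreate only the
# index file, leaving the data file alone). Path is a placeholder.
myisamchk -rq /var/lib/mysql/mydb/mytable.MYI
```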
> > >
> > >The problems you're describing are similar to what I've run into when
> > > there have been hardware related problems.
> > >
> > >One system had a problem with ram. Memory tests would test and report ram
> > >as ok, but everything started working when I replaced the ram. I think it
> > >was just brand incompatibility or something odd, because the ram never
> > > gave any problems in another system.
> >
> > I can generate the problem on much smaller data sets, in the mid tens of
> > thousands of records rather than the millions of records.
> >
> > I'll do a memtest86 run on the development boxes overnight, but as I did
> > that just after I installed Linux on them and used the Linux badram patch
> > to exclude iffy sections of RAM, I don't think that's the problem.
> >
> > >One system had hard drive media slowly failing and this wasn't obvious
> > >until we ran several full scan chkdsks.
> >
> > Three hard drives, all of different brand, model & size, and the problem
> > happening in the same place on all of them? Not likely.
> >
> > >The funniest situation was where enough dust had collected in the CPU fan
> > >to cause slight overheating, which resulted in oddball errors.
> >
> > This isn't a problem on my box. I have a 1.5 pound copper heatsink with a
> > 90mm heat sensitive fan and a fan+heatsink for the hard drive, and I saw
> > myisamchk consistently generate the same error in the same place over and
> > over. The sensors report my CPU running in the 45 degree centigrade range
> > on my box pretty consistently.
> >
> > >In each of these cases, everything would work fine until the system would
> > >start processing larger amounts of data. Small amounts of corruption
> > > began to show up that seemed to build on itself.
> > >
> > >This may or may not relate to what you're dealing with, but maybe it will
> > >help =)
> >
> > I'll look, but I don't think that's the problem. I'm going to see how
> > small of a data set will cause this problem and file a bug report.
> >
> > --
> > Christopher L. Everett
> >
> > Chief Technology Officer www.medbanner.com
> > MedBanner, Inc. www.physemp.com
> >
> >
> > --
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
>
>