On 03/21/13 08:43, Konstantin Khomoutov wrote:
> On Thu, 21 Mar 2013 07:40:01 -0400
> Phil Stracchino wrote:
>
> [...]
>> Try this for a faster method:
>>
>> CREATE TABLE NewFile LIKE File;
>> INSERT INTO NewFile (SELECT * FROM File);
>> DROP TABLE File;
>> RENAME TABLE NewFile TO File;
>
> I th
On Thu, 21 Mar 2013 14:46:04 +0100
Uwe Schuerkamp wrote:
> > CREATE TABLE NewFile LIKE File;
> > INSERT INTO NewFile (SELECT * FROM File);
> > DROP TABLE File;
> > RENAME TABLE NewFile TO File;
> >
>
> Thanks Phil, I was a bit anxious about the select into bit because the
> server ran out of memory
On Thu, Mar 21, 2013 at 07:40:01AM -0400, Phil Stracchino wrote:
> Try this for a faster method:
>
> CREATE TABLE NewFile LIKE File;
> INSERT INTO NewFile (SELECT * FROM File);
> DROP TABLE File;
> RENAME TABLE NewFile TO File;
>
Thanks Phil, I was a bit anxious about the select into bit because the
server ran out of memory
On Thu, 21 Mar 2013 07:40:01 -0400
Phil Stracchino wrote:
[...]
> Try this for a faster method:
>
> CREATE TABLE NewFile LIKE File;
> INSERT INTO NewFile (SELECT * FROM File);
> DROP TABLE File;
> RENAME TABLE NewFile TO File;
I think the "ALTER TABLE" step is missing, and this should be:
CREA
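Konstantin's corrected sequence is cut off above, but assuming the missing
ALTER TABLE step is meant to convert the empty copy to InnoDB (the engine
this thread is migrating to), it would presumably read something like:

    CREATE TABLE NewFile LIKE File;
    ALTER TABLE NewFile ENGINE=InnoDB;  -- convert the empty copy before loading it
    INSERT INTO NewFile (SELECT * FROM File);
    DROP TABLE File;
    RENAME TABLE NewFile TO File;

That way the 500-million-odd rows are written straight into an InnoDB table
instead of having to be converted again afterwards.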
On 03/21/13 04:27, Uwe Schuerkamp wrote:
> Hi folks,
>
> last night I managed to shrink a 300GB File table down to a very
> reasonable 85GB by exporting, dropping and re-importing the table
> (about 500 million rows).
>
> Dumping the table on an idle bacula server (see earlier posts for
> machine
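For reference, the export/drop/re-import cycle described above can be
sketched roughly as follows; the catalog name "bacula" and the dump path are
placeholders, not details taken from the thread:

    # run with the bacula director stopped so nothing writes to the catalog
    mysqldump bacula File > /var/tmp/File.sql      # export only the File table
    mysql bacula -e 'DROP TABLE File'              # drop the bloated original
    mysql bacula < /var/tmp/File.sql               # re-import (the dump includes CREATE TABLE)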
On Tue, Feb 26, 2013 at 04:24:02PM +, Alan Brown wrote:
> On 26/02/13 09:42, Uwe Schuerkamp wrote:
>
> >I wonder if dumping the file table and then
> >re-importing it to an innodb replacement would have been quicker?
>
> In general: Yes.
>
>
Hi folks,
last night I managed to shrink a 300GB File table down to a very
reasonable 85GB by exporting, dropping and re-importing the table
(about 500 million rows).
On Tue, Mar 05, 2013 at 11:26:45AM +, Alan Brown wrote:
> On 01/03/13 11:16, Uwe Schuerkamp wrote:
> There's no real need for that. If you tell mysqldump to use compatible
> dump formats then postgres will import it. (mysqldump --compatible=(name))
>
Hi Alan, thanks for the heads-up.
> I'
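In practice Alan's suggestion would look something like the line below; the
catalog name and output file are assumptions, and it dumps only the data on
the assumption that the postgres schema is created separately with Bacula's
own scripts:

    mysqldump --compatible=postgresql --no-create-info --skip-extended-insert \
        bacula > bacula-data.sql

Depending on the schema, a little hand-editing of the dump may still be
needed before postgres accepts it.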
Another perspective...
I've personally found that if your memory is limited (my bacula db
server has 8GB of RAM), then for a bacula database mysql performs
_better_ than postgres. My File table currently has 2,856,394,323 rows.
I've seen so many recommendations here and elsewhere about postgres
On Tue, Feb 26, 2013 at 04:23:20PM +, Alan Brown wrote:
> On 26/02/13 09:42, Uwe Schuerkamp wrote:
>
>
> >for the record I'd like to give you some stats from our recent myisam
> >-> innodb conversion.
>
>
> For the sizes you're talking about, I'd recommend:
>
> 1: A _lot_ more memory. 100G
On Tue, Feb 26, 2013 at 04:24:02PM +, Alan Brown wrote:
> On 26/02/13 09:42, Uwe Schuerkamp wrote:
>
> >I wonder if dumping the file table and then
> >re-importing it to an innodb replacement would have been quicker?
>
> In general: Yes.
>
>
Hi folks,
it looks like moving over to InnoDB h
Hello,
> key_buffer = 1G
> innodb_file_per_table
> innodb_flush_method=O_DIRECT
> innodb_flush_log_at_trx_commit=2
> innodb_buffer_pool_size=12G
> innodb_log_buffer_size=4M
> innodb_thread_concurrency=8
You can tune innodb_log_file_size. Beware: you have to shut mysql down
cleanly and move the old redo log files aside before changing it.
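On the 5.5-era servers discussed in this thread the redo logs cannot simply
be resized in place; the usual sequence is roughly the following (init
script name and datadir are assumptions for a stock CentOS install):

    # force a full, clean InnoDB shutdown so the old logs are not needed for recovery
    mysql -e "SET GLOBAL innodb_fast_shutdown = 0;"
    service mysql stop
    # move the old redo logs out of the way (keep them until the server restarts cleanly)
    mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /var/tmp/
    # set the new size in my.cnf under [mysqld], for example:
    #   innodb_log_file_size = 256M
    # then start mysqld; it recreates the log files at the new size
    service mysql start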
Hi folks,
for the record I'd like to give you some stats from our recent myisam
-> innodb conversion.
The server in question runs mariadb 5.5.27 on centos 6 64bit (latest
patches), has 18GB of RAM with a RAID5 built over 8 internal SAS disks
(7 + one spare) which stores the mysql tables.
Our f