While I'm using Bareos with a PostgreSQL database, this hint should suit MySQL
too.
I've slightly modified the /usr/lib/bareos/scripts/make_catalog_backup.pl
script to pipe the dump through gzip, like:
exec("HOME='$wd' pg_dump -c | gzip > '$wd/$args{db_name}.sql.gz'");
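For illustration, the same pipe pattern can be tried outside the script. The printf line below is only a stand-in for pg_dump output so the sketch runs without a live database; the real pg_dump invocation, the bareos database name, and the paths are assumptions to adapt to your setup:

```shell
# Real pipeline (needs a live catalog; db name "bareos" assumed):
#   HOME=/var/lib/bareos pg_dump -c bareos | gzip > /var/lib/bareos/bareos.sql.gz

# Stand-in demonstration: pipe "dump" output straight into gzip,
# never writing the uncompressed SQL to disk...
printf 'CREATE TABLE file (fileid BIGSERIAL);\n' | gzip > /tmp/bareos.sql.gz

# ...then verify the compressed dump round-trips intact.
gunzip -c /tmp/bareos.sql.gz
```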
The corresponding Catalog.conf was modified to back up the *.gz version.
Catalog backup time and required disk space were dramatically reduced.
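For reference, the change on the director side amounts to pointing the Catalog FileSet at the compressed file. The resource below mirrors the stock Bareos example; the dump path is the Debian/Ubuntu default and is an assumption, so adjust it to your installation:

```
FileSet {
  Name = "Catalog"
  Include {
    Options {
      signature = MD5
    }
    # was: File = "/var/lib/bareos/bareos.sql"  (default path, assumed)
    File = "/var/lib/bareos/bareos.sql.gz"
  }
}
```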
If you are concerned about your index size, it is healthy for any database to
rebuild its indexes periodically.
You can find the index CREATE statements in the SQL scripts under
/usr/lib/bareos/scripts/ddl/creates/.
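A periodic rebuild can be done with PostgreSQL's REINDEX (MySQL's rough equivalent is OPTIMIZE TABLE). The database and table names below are Bareos defaults and assumptions; check them against your catalog:

```sql
-- PostgreSQL: rebuild every index in the catalog database
REINDEX DATABASE bareos;

-- or only the usually largest table:
REINDEX TABLE file;

-- MySQL rough equivalent (rebuilds the table and its indexes):
-- OPTIMIZE TABLE File;
```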
On Thursday, June 11, 2020 at 3:29:30 PM UTC+3, Kai Zimmer wrote:
>
> Hi,
>
> in former times I used Bareos with a MySQL database backend. However, it
> became too slow and I switched to a secondary Postgres catalogue. I need
> to keep the MySQL database as a history, though.
>
> Now I'm switching from Ubuntu 16.04 (MySQL 5.7) to Ubuntu 20.04 (MySQL
> 8.0) and I'm unable to start the mysqld server because of incompatible
> data structures. I tried dumping the database on another Ubuntu 16.04
> machine, but the SQL dump file is only about 128 GB in size, although
> the binary index files are > 200 GB in size.
>
> Is there a known limit in mysqldump? I'm using the ext4 file system,
> which supports single files up to 16 TB.
>
> Best,
>
> Kai
>
>
--
You received this message because you are subscribed to the Google Groups
"bareos-users" group.