Re: [Bacula-users] EXTERNAL - Re: /var/lib/mysql at 100%

2020-07-03 Thread Phil Stracchino
On 2020-07-03 20:38, Chaz Vidal wrote:
> Thank you for the response Phil,
> 
> We do not have the option of adding disk space to the system now.  I do have 
> a large /var/spool/bacula directory if that helps?
> 
> I don't believe we have any binary log files in this filesystem that are worth 
> purging.  It is the bacula directory and the ibdata1 file that are chewing up 
> all the space:
> 
> /var/lib/mysql# ls -larth
> total 125G
> drwx------  2 mysql mysql 4.0K Jul 22  2015 lost@002bfound
> -rw-r--r--  1 mysql mysql    0 Feb 10 19:47 debian-10.3.flag
> drwx------  2 mysql mysql 4.0K Feb 10 19:47 mysql
> drwx------  2 mysql mysql 4.0K Feb 10 19:47 performance_schema
> -rw-rw----  1 mysql mysql 428K Feb 11 14:07 ib_buffer_pool
> -rw-r-----  1 mysql mysql    0 Feb 11 14:56 multi-master.info
> -rw-------  1 mysql mysql   16 Feb 11 14:56 mysql_upgrade_info
> drwx------  6 mysql mysql 4.0K Feb 11 15:29 .
> -rw-rw----  1 mysql mysql  24K Feb 11 15:29 tc.log
> drwxr-xr-x 45 root  root  4.0K Feb 17 21:42 ..
> drwx------  2 mysql mysql 4.0K Jun 26 23:54 bacula
> -rw-rw----  1 mysql mysql  88K Jul  3 23:03 aria_log.00000001
> -rw-rw----  1 mysql mysql   52 Jul  3 23:03 aria_log_control
> -rw-rw----  1 mysql mysql 4.4G Jul  4 08:06 ibtmp1
> -rw-rw----  1 mysql mysql 120G Jul  4 09:58 ibdata1
> -rw-rw----  1 mysql mysql  48M Jul  4 09:58 ib_logfile0
> -rw-rw----  1 mysql mysql  48M Jul  4 09:58 ib_logfile1
> 
> 
> I've checked the /etc/mysql/my.cnf config file and there is no line for 
> innodb_file_per_table, which suggests we are not running with it enabled?
> 
> And as per your next step, it would have to be the dump-and-reload 
> option.
> 
> If so, is the right process to stop bacula, run the dump and restore and 
> restart bacula?
> 
> Would doing any purging of old jobs help in this matter?


OK, if your filesystem is at 100% you're not going to be able to execute
any writes, which means you won't be able to purge jobs.

What's that tc.log?  Do you need it?

You have an aria.log, so you're running MariaDB.  Which release?

Is this an ext3/4 filesystem?  Using the root of an ext3/4 filesystem
for MySQL/MariaDB data causes problems because mysqld assumes that any
subdirectory of its data directory is a database, and so the ext*
filesystem's lost+found directory creates problems.  If you have to nuke
and repave anyway, you might consider switching the filesystem to XFS.


If you have no way to expand the filesystem (say, by adding an LVM
extent to it), and no way to free space on it (there's really nothing
significant in here to give you any leeway), then your options for
compacting your data are pretty much limited to dump-and-reload, but
without knowing the extent of fragmentation in the tablespace there's no
way to tell whether it will gain you enough to keep operating.  If you
have a disk-full condition and mysql/mariadb is stopped, it's a pretty
safe bet it's already packed everything it can into the global
tablespace.  You PROBABLY won't recover significant space by a
dump-and-reload in this case.


Do you have somewhere else on the system where you could allocate a new,
larger data directory?
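
If such a location exists, relocating the data directory is one way out. A rough sketch of the move (assuming systemd, MariaDB, and a hypothetical larger filesystem mounted at /data; paths and service names are assumptions, and this is not a tested procedure for your system):

```shell
# Stop writers first: the director, then the database itself.
systemctl stop bacula-dir
systemctl stop mariadb

# Copy the data directory wholesale, preserving ownership and permissions.
rsync -a /var/lib/mysql/ /data/mysql/

# Point mysqld at the new location by setting, in the [mysqld]
# section of my.cnf:
#     datadir = /data/mysql
# (On Debian/Ubuntu, AppArmor rules for mysqld may also need updating.)

systemctl start mariadb
systemctl start bacula-dir
```

Keep the old /var/lib/mysql contents until the server has verifiably started clean from the new location.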



-- 
  Phil Stracchino
  Babylon Communications
  ph...@caerllewys.net
  p...@co.ordinate.org
  Landline: +1.603.293.8485
  Mobile:   +1.603.998.6958


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] EXTERNAL - Re: /var/lib/mysql at 100%

2020-07-03 Thread Chaz Vidal
Thank you for the response Phil,

We do not have the option of adding disk space to the system now.  I do have a 
large /var/spool/bacula directory if that helps?

I don't believe we have any binary log files in this filesystem that are worth 
purging.  It is the bacula directory and the ibdata1 file that are chewing up all 
the space:

/var/lib/mysql# ls -larth
total 125G
drwx------  2 mysql mysql 4.0K Jul 22  2015 lost@002bfound
-rw-r--r--  1 mysql mysql    0 Feb 10 19:47 debian-10.3.flag
drwx------  2 mysql mysql 4.0K Feb 10 19:47 mysql
drwx------  2 mysql mysql 4.0K Feb 10 19:47 performance_schema
-rw-rw----  1 mysql mysql 428K Feb 11 14:07 ib_buffer_pool
-rw-r-----  1 mysql mysql    0 Feb 11 14:56 multi-master.info
-rw-------  1 mysql mysql   16 Feb 11 14:56 mysql_upgrade_info
drwx------  6 mysql mysql 4.0K Feb 11 15:29 .
-rw-rw----  1 mysql mysql  24K Feb 11 15:29 tc.log
drwxr-xr-x 45 root  root  4.0K Feb 17 21:42 ..
drwx------  2 mysql mysql 4.0K Jun 26 23:54 bacula
-rw-rw----  1 mysql mysql  88K Jul  3 23:03 aria_log.00000001
-rw-rw----  1 mysql mysql   52 Jul  3 23:03 aria_log_control
-rw-rw----  1 mysql mysql 4.4G Jul  4 08:06 ibtmp1
-rw-rw----  1 mysql mysql 120G Jul  4 09:58 ibdata1
-rw-rw----  1 mysql mysql  48M Jul  4 09:58 ib_logfile0
-rw-rw----  1 mysql mysql  48M Jul  4 09:58 ib_logfile1


I've checked the /etc/mysql/my.cnf config file and there is no line for 
innodb_file_per_table, which suggests we are not running with it enabled?

And as per your next step, it would have to be the dump-and-reload option.

If so, is the right process to stop bacula, run the dump and restore and 
restart bacula?

Would doing any purging of old jobs help in this matter?

Appreciate the support/advice!

Cheers
Chaz



Chaz Vidal | ICT Infrastructure | Tel: +61-8-8128-4397 | Mob: +61-492-874-982 | 
chaz.vi...@sahmri.com

-Original Message-
From: Phil Stracchino  
Sent: Saturday, 4 July 2020 8:25 AM
To: bacula-users@lists.sourceforge.net
Subject: EXTERNAL - Re: [Bacula-users] /var/lib/mysql at 100%

[External email: Use caution with links and attachments]

On 2020-07-03 17:53, Chaz Vidal wrote:
> Hi All
>
> We've filled up the file system for MySQL although the backups appear 
> to still be running.
>
>
> Are you able to advise if the commands in the catalog maintenance 
> section of the Bacula manual will still be the recommended way to free 
> up some space?
>
>
> mysqldump -f --opt bacula > bacula.sql
> mysql bacula < bacula.sql
> rm -f bacula.sql
>
> I assume that bacula-dir needs to be shut down during this maintenance 
> period?
>
> Hoping for some advice, thanks!


OK, this is really not a good way to go about it.  It's of questionable safety 
to perform an operation like this with the filesystem 100% full.


First:  Do you have any binary logs or other logs on that filesystem?
Look for anything in /var/lib/mysql that can be flushed or purged.  If the 
MySQL error log is in that directory, which it often is, flush it and rotate 
it.  See if that frees up a little space.

Second:  Do you have any options for expanding the filesystem?
Particularly expanding it on the fly right now before you do anything else (LVM 
for instance)?  If you can expand it on the fly, do that now, before you do 
anything else.


Next order of business, are you using innodb_file_per_table?  If you are, and 
you're able to free up a little bit of working space, use 
information_schema.tables to get a list of tables sorted by data_length
+ index_length ascending, and start doing OPTIMIZE TABLE operations,
smallest tables first.  If you're lucky this will free up enough space to 
optimize the large tables as well.  If there isn't sufficient space, it'll just 
fail with no loss of data.

If you are NOT using file_per_table, or if the above method doesn't free enough 
space, then you'll have to dump and reload.  The above mysqldump instructions 
are very minimalist though and I would not recommend doing it that way.  --opt 
is on by default anyway, and -f means continue in spite of errors, something 
you probably DO NOT want to do when dumping a mission-critical database like 
your backup catalog.  Try using mysqldump -qQcER instead.  You may optionally 
add --skip-lock-tables --single-transaction.

If you are NOT already using innodb_file_per_table, I strongly advise 
converting to file_per_table before you reload.  If you are converting to 
file_per_table, you definitely want to dump and reload ALL databases, not just 
your bacula database, because you cannot shrink the global tablespace (the 
ibdata1 file), all you can do is delete and reinitialize it, and doing that 
will require a full reload of everything.




--
  Phil Stracchino
  Babylon Communications
  ph...@caerllewys.net
  p...@co.ordinate.org
  Landline: +1.603.293.8485
  Mobile:   +1.603.998.6958


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] /var/lib/mysql at 100%

2020-07-03 Thread Phil Stracchino
On 2020-07-03 17:53, Chaz Vidal wrote:
> Hi All
> 
> We've filled up the file system for MySQL although the backups appear to
> still be running.
> 
> 
> Are you able to advise if the commands in the catalog maintenance
> section of the Bacula manual will still be the recommended way to free
> up some space?
> 
> 
> mysqldump -f --opt bacula > bacula.sql
> mysql bacula < bacula.sql
> rm -f bacula.sql
> 
> I assume that bacula-dir needs to be shut down during this maintenance
> period?
> 
> Hoping for some advice, thanks!


OK, this is really not a good way to go about it.  It's of questionable
safety to perform an operation like this with the filesystem 100% full.


First:  Do you have any binary logs or other logs on that filesystem?
Look for anything in /var/lib/mysql that can be flushed or purged.  If
the MySQL error log is in that directory, which it often is, flush it
and rotate it.  See if that frees up a little space.

Second:  Do you have any options for expanding the filesystem?
Particularly expanding it on the fly right now before you do anything
else (LVM for instance)?  If you can expand it on the fly, do that now,
before you do anything else.


Next order of business, are you using innodb_file_per_table?  If you
are, and you're able to free up a little bit of working space, use
information_schema.tables to get a list of tables sorted by data_length
+ index_length ascending, and start doing OPTIMIZE TABLE operations,
smallest tables first.  If you're lucky this will free up enough space
to optimize the large tables as well.  If there isn't sufficient space,
it'll just fail with no loss of data.
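
For reference, the size-ordered table list described above can be pulled from information_schema with something like the following sketch (the catalog database name `bacula` is assumed; verify you have working space before the loop reaches the large tables):

```shell
# List the bacula catalog's tables smallest-first by data+index size,
# then OPTIMIZE each one in that order.  Sketch only -- run the SELECT
# alone first to review the order before letting the loop modify anything.
mysql -N -e "SELECT table_name FROM information_schema.tables
             WHERE table_schema = 'bacula'
             ORDER BY data_length + index_length ASC" |
while read -r t; do
    echo "Optimizing $t ..."
    mysql -e "OPTIMIZE TABLE bacula.\`$t\`;"
done
```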

If you are NOT using file_per_table, or if the above method doesn't free
enough space, then you'll have to dump and reload.  The above mysqldump
instructions are very minimalist though and I would not recommend doing
it that way.  --opt is on by default anyway, and -f means continue in
spite of errors, something you probably DO NOT want to do when dumping a
mission-critical database like your backup catalog.  Try using mysqldump
-qQcER instead.  You may optionally add --skip-lock-tables
--single-transaction.

If you are NOT already using innodb_file_per_table, I strongly advise
converting to file_per_table before you reload.  If you are converting
to file_per_table, you definitely want to dump and reload ALL databases,
not just your bacula database, because you cannot shrink the global
tablespace (the ibdata1 file), all you can do is delete and reinitialize
it, and doing that will require a full reload of everything.
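
Put together, a hedged outline of that full dump-and-reload with a file_per_table conversion might look like this (assuming MariaDB under systemd and a hypothetical scratch location with room for the dump; the dump MUST be verified before anything is deleted, since removing ibdata1 destroys all InnoDB data):

```shell
# 1. Stop the director so no new catalog writes arrive.
systemctl stop bacula-dir

# 2. Dump ALL databases with the options suggested above.
mysqldump -qQcER --all-databases --skip-lock-tables --single-transaction \
    > /scratch/all-databases.sql

# 3. Stop mysqld, enable file-per-table in the [mysqld] section of my.cnf:
#        innodb_file_per_table = 1
#    then remove the old global tablespace and its redo logs.
systemctl stop mariadb
rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1

# 4. Restart (InnoDB reinitializes ibdata1 at its small default size)
#    and reload everything.
systemctl start mariadb
mysql < /scratch/all-databases.sql

systemctl start bacula-dir
```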




-- 
  Phil Stracchino
  Babylon Communications
  ph...@caerllewys.net
  p...@co.ordinate.org
  Landline: +1.603.293.8485
  Mobile:   +1.603.998.6958


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Cannot find any appendable volumes

2020-07-03 Thread Greg Woods
I am getting this error when running copy jobs. I have an "ARCHIVE-ALL" job
that searches for any client backup jobs that terminated normally and have
not yet been copied from online to archive backups, and runs copy jobs for
each. I have been using this system for 7 years now, and it has worked well
until today.

Here are the Pool definitions (File is the online pool, and Archive is a
vchanger with 6 removable disks as magazines):

Pool {
  Name = File
  Pool Type = Backup
  Recycle = yes               # Bacula can automatically recycle Volumes
  AutoPrune = yes             # Prune expired volumes
  Volume Retention = 365 days # 1 year
  Maximum Volume Bytes = 50G  # Limit Volume size to something reasonable
  Maximum Volumes = 245       # Limit number of Volumes in Pool
  Storage = BKUP
  Next Pool = Archive         # Where to copy/migrate to
}
Pool {
  Name = Archive
  Pool Type = Backup
  Recycle = yes               # Bacula can automatically recycle Volumes
  AutoPrune = yes             # Prune expired volumes
  Volume Retention = 365 days # one year
  Maximum Volume Bytes = 50G  # Limit Volume size to something reasonable
  Maximum Volumes = 136       # Limit number of Volumes in Pool
  Storage = ARCH
}

As you can see, the volume retention in the Archive pool is set to 365
days, and Recycle is set to yes, which means (if I understand this
correctly) that any volume that hasn't been written to in more than a year
should be available to recycle. So why is it suddenly telling me that there
are no appendable volumes after 7 years of running this configuration?

All of my client definitions look like this:

Client {
  Name = "myhost"
  Address = myhost.my.domain
  Catalog = "My Catalog" # there is only one
  Password = "big secret"
  File Retention = 365 days
  Job Retention = 365 days
}

Did I miss something? There are definitely some volumes in  the Archive
pool that have not been written to in more than a year; shouldn't those be
available for recycling?

Lastly, what command would I use to let Bacula know that a given volume can
be made available? Eventually I have to solve my configuration issue, but
for now I'd like to just mark some old volumes as available manually so
that my copy jobs can continue.
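
Not an answer to the underlying retention question, but for manually freeing a volume the usual bconsole sequence looks roughly like this (the volume name is hypothetical; note that purge, unlike prune, ignores retention periods entirely, so be certain the jobs on the volume are expendable):

```shell
# prune respects retention periods; purge does not.
echo "prune volume=ARCH-0042 yes" | bconsole   # drop only expired records
echo "purge volume=ARCH-0042"     | bconsole   # force the volume to Purged
echo "llist volume=ARCH-0042"     | bconsole   # confirm the new VolStatus
```

A volume whose VolStatus is Purged (with Recycle = yes) becomes a recycling candidate the next time the pool needs an appendable volume.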

Thanks for any pointers,
--Greg
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] /var/lib/mysql at 100%

2020-07-03 Thread Chaz Vidal
Hi All

We've filled up the file system for MySQL although the backups appear to still 
be running.


Are you able to advise if the commands in the catalog maintenance section of 
the Bacula manual will still be the recommended way to free up some space?


mysqldump -f --opt bacula > bacula.sql
mysql bacula < bacula.sql
rm -f bacula.sql


I assume that bacula-dir needs to be shut down during this maintenance period?

Hoping for some advice, thanks!

Chaz





___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] S3 Configuration for Bacula 9.6.5 on Debian 10

2020-07-03 Thread sruckh--- via Bacula-users
On 2020-07-02 15:53, r0...@nxlplyx.com wrote:

> I have compiled Bacula 9.6.5 on Debian 10, with a lot of help from the folks 
> here. 
> 
> Now I cannot get S3 to work, probably my own lack of knowledge. 
> 
> I can back up and restore to local directories, that is no problem. 
> 
> However I am unable to get the S3 module to back up to an S3 server.  I can 
> use Fuse S3 to mount the S3 storage and move files back and forth.
> 
> === 
> 
> From ./bconsole
> 
> The defined Storage resources are:
> 1: storage_dev_S3_nd1
> 2: File1
> 3: File2
> Select Storage resource (1-3): 1
> Connecting to Storage daemon storage_dev_S3_nd1 at debian:9103 ...
> 
> (hangs and no log is produced) 
> 
> I can't figure out what else to do. Can anyone help?
> 
> =
> 
> bacula_sd.conf
> 
> Device {
> Name = dev_S3_nd1
> Device Type = Cloud
> Cloud = S3Cloud
> Archive Device = /home/user1/dirNFS/bu/bacula_cloudstorage_test
> Maximum Part Size = 1000
> Media Type = CloudType
> LabelMedia = yes;
> Random Access = Yes;
> AutomaticMount = yes
> RemovableMedia = no
> AlwaysOpen = no
> }
> 
> Cloud {
> Name = S3Cloud
> Driver = "S3"
> HostName =  with Fuse S3)
> BucketName = "test_bucket"
> Access Key = 
> Secret Key = 
> Protocol = HTTPS
> UriStyle = VirtualHost
> Truncate Cache = No
> Upload = EachPart
> Region = "us-central-1"
> MaximumUploadBandwidth = 5MB/s
> }
> 
> ==
> 
> bacula_dir.conf
> 
> Storage {
> Name = storage_dev_S3_nd1
> Address = debian
> SD Port = 9103
> Password = 
> Device = dev_S3_nd1
> Media Type = CloudType
> }
> 
> Job {
> Name = "BackupClient1-to-Cloud"
> JobDefs = "DefaultJob"
> Storage = storage_dev_S3_nd1
> Client = debian-fd
> }

Here is what I have configured -- 

bacula-sd.conf 

Cloud {
Name = BackBlaze-AWS
Driver = "S3"
HostName = "somehost.com"
BucketName = "somebucket"
AccessKey = "AccessKey"
SecretKey = "SecretKey"
Protocol = HTTPS
Region = "someregion"
Upload = EachPart
} 

Device {
Name = B2-AWS
Device Type = Cloud
Cloud = BackBlaze-AWS
Archive Device = /path/to/local/backup/directory
Maximum Part Size = 10485760
Media Type = B2-AWS-Type
LabelMedia = yes
Random Access = Yes;
AutomaticMount = yes
RemovableMedia = no
AlwaysOpen = no
} 

bacula-dir.conf 

Autochanger {
Name = B2-AWS
Address = fqdn.directory
SDPort = 9103
Password = "mypassword"
Device = B2-AWS
Media Type = B2-AWS-Type
Maximum Concurrent Jobs = 10
Autochanger = B2-AWS
} 

Pool {
Name = B2-AWS-Pool
Pool Type = Backup
Recycle = yes
AutoPrune = yes
Volume Retention = 365 days
Maximum Volume Bytes = 268435456
Maximum Volumes = 8192
Storage = B2-AWS
CacheRetention = 30 days
LabelFormat = "B2-AWS-Vol-"
}
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Repo Bacula

2020-07-03 Thread Crystian Dorabiatto
Awesome, thanks!

On Fri, 3 Jul 2020 at 03:49, Davide Franco wrote:

> Hello,
>
> Centos 6 packages for Bacula 9.6.5 are now available in the repo.
>
> Best regards
>
> Davide
>
> On Thu, 2 Jul 2020 at 22:49, Crystian Dorabiatto <
> geovani.dorabia...@gmail.com> wrote:
>
>> Ok, thanks.
>>
>> On Thu, Jul 2, 2020 at 5:46 PM Davide Franco  wrote:
>>
>>> Hello Geovani,
>>>
>>> I will build Centos 6 packages when I have a bit of time.
>>>
>>> Best,
>>>
>>> Davide
>>>
>>> On Wed, 1 Jul 2020 at 19:03, Crystian Dorabiatto <
>>> geovani.dorabia...@gmail.com> wrote:
>>>
 Good afternoon,

 I have a doubt.

 On repo RPM version 9.6.5. Will we have a version for CentOS 6 (EL6)?

 Thanks

 Regards.

>>>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula Errors Running Differential Backup Job with Accurate Set

2020-07-03 Thread Adolf Belka

Hello Michael,

Yes, I am using MySQL for the catalogue and the Accurate=Yes option for all jobs. 
It has been running like that for a long time with no problems for Incremental and 
Differential as well as Full backups.

Regards

Adolf

Sent from my Desktop Computer

On 03/07/2020 09:01, Radosław Korzeniewski wrote:

Hello Michael,

On Fri, 3 Jul 2020 at 06:12, Michael Williams <mickwi...@gmail.com> wrote:

Hello,

So it's been some time since I was last looking into this issue. I have finally set up 
another test VM running FreeBSD and installed Bacula using FreeBSD Ports, compiled with 
MySQL as the database. Running a test, it has exactly the same issue when running an 
incremental or differential backup with "Accurate = Yes".


Could you share your logs, please?

If you run your job with bad preconditions then I do not expect different 
results.

Is anyone else using MySQL for the catalogue? If so, are you using the 
'Accurate = Yes' job configuration option?


I do not know, because you sent this email to me directly and not to the user 
group.

best regards
--
Radosław Korzeniewski
rados...@korzeniewski.net 


___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula Errors Running Differential Backup Job with Accurate Set

2020-07-03 Thread Radosław Korzeniewski
Hello Michael,

On Fri, 3 Jul 2020 at 06:12, Michael Williams wrote:

> Hello,
>
> So it's been some time since I was last looking into this issue. I have
> finally set up another test VM running FreeBSD and installed Bacula using
> FreeBSD Ports, compiled with MySQL as the database. Running a test, it has
> exactly the same issue when running an incremental or differential backup
> with "Accurate = Yes".
>

Could you share your logs, please?

If you run your job with bad preconditions then I do not expect different
results.


> Is anyone else using MySQL for the catalogue? If so, are you using the
> 'Accurate = Yes' job configuration option?
>

I do not know, because you sent this email to me directly and not to the
user group.

best regards
-- 
Radosław Korzeniewski
rados...@korzeniewski.net
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Repo Bacula

2020-07-03 Thread Davide Franco
Hello,

Centos 6 packages for Bacula 9.6.5 are now available in the repo.

Best regards

Davide

On Thu, 2 Jul 2020 at 22:49, Crystian Dorabiatto <
geovani.dorabia...@gmail.com> wrote:

> Ok, thanks.
>
> On Thu, Jul 2, 2020 at 5:46 PM Davide Franco  wrote:
>
>> Hello Geovani,
>>
>> I will build Centos 6 packages when I have a bit of time.
>>
>> Best,
>>
>> Davide
>>
>> On Wed, 1 Jul 2020 at 19:03, Crystian Dorabiatto <
>> geovani.dorabia...@gmail.com> wrote:
>>
>>> Good afternoon,
>>>
>>> I have a doubt.
>>>
>>> On repo RPM version 9.6.5. Will we have a version for CentOS 6 (EL6)?
>>>
>>> Thanks
>>>
>>> Regards.
>>>
>>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users