[Bacula-users] Need S3 cloud Bacula Plugin

2023-12-04 Thread Nilkanth Lugade
Hi,

We need an S3 cloud Bacula plugin (bpipe). We have searched but have not
found one yet, so please help us with a bpipe configuration source.
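
For context, the kind of bpipe FileSet we are imagining is sketched below
(the bucket, catalog path, and wrapper scripts are placeholders, not a
tested configuration; it assumes bpipe-fd.so is installed in the File
Daemon's Plugin Directory, and wrapper scripts are used because the plugin
string is colon-delimited, so a literal "s3://" URL inside it would break
the parsing):

FileSet {
  Name = "S3ObjectSet"
  Include {
    Options { signature = MD5 }
    # bpipe:<virtual catalog path>:<reader, runs at backup time, its stdout is saved>
    #      :<writer, runs at restore time, receives the data on stdin>
    Plugin = "bpipe:/S3/mydata.dump:/usr/local/bin/s3-read.sh:/usr/local/bin/s3-write.sh"
  }
}

# /usr/local/bin/s3-read.sh
#!/bin/sh
exec aws s3 cp s3://my-bucket/mydata.dump -

# /usr/local/bin/s3-write.sh
#!/bin/sh
exec aws s3 cp - s3://my-bucket/mydata.dump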


Regards,
Nilkanth
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Please help me to unblock my backup run

2023-12-04 Thread MylesDearBusiness via Bacula-users
I think that just did the trick, Rob.


I really appreciate your persistence; common sense is never common,
especially the first time one tries to accomplish something.

I changed the ArchiveDevice parameters to point to my
/path/to/bacula/archive directory and launched a backup run.
This looks a lot more like what I expect:

root@c1:~# !ls
ls -l /mnt/MylesDearDropBox/Backup/bacula/archive/
total 4926540
-rw-r--r-- 1 root root 1073737956 Dec  5 01:04 VolMpwrWare-0003
-rw-r--r-- 1 root root 1073737951 Dec  5 01:04 VolMpwrWare-0004
-rw-r--r-- 1 root root 1073737808 Dec  5 01:04 VolMpwrWare-0005
-rw-r--r-- 1 root root 1073737968 Dec  5 01:05 VolMpwrWare-0006
-rw-r--r-- 1 root root  749823210 Dec  5 01:05 VolMpwrWare-0007


Best,




(removed content from past emails to fit the listserv policy)


Re: [Bacula-users] Please help me to unblock my backup run

2023-12-04 Thread Rob Gerber
(resending; I deleted quoted text from previous messages so my message will
stay under the 40 kB limit)

Myles,

1. Basically, I suspect rclone filled its cache and bacula stopped the
backup at that point. My guess is that if you ran a backup of less than
1GiB right now in bacula, it would succeed; it's guaranteed not to fill the
rclone buffer. Maybe adjust the rclone cache size or otherwise fiddle with
that portion of it? Definitely exclude the rclone cache (if it is written
to disk) from the bacula backup!
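
For example, something along these lines in the mount command (a sketch
only; the cache directory and the 10G size are placeholder values to adapt
to your own rclone invocation), with that cache directory then excluded in
the bacula FileSet:

rclone mount MylesDearDropBox: /mnt/MylesDearDropBox \
    --daemon \
    --vfs-cache-mode writes \
    --cache-dir /var/cache/rclone \
    --vfs-cache-max-size 10G \
    --vfs-cache-max-age 1h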

2. I think there may be a mistake in how your bacula-sd.conf defines the
bacula FileChgr1-Dev1 and FileChgr1-Dev2 devices.

Your bacula virtual devices (FileChgr1-Dev1 and FileChgr1-Dev2) point to
actual files on the system. Mine, in the default config (working; it backs
up the bacula database routinely), point to the folder that holds the
volume files. In my configuration the volume filenames are never defined in
the FileChgr* resources.

Did you create these files yourself?
root@c1:~# ls -l /mnt/MylesDearDropBox/Backup/bacula/archive/
total 20971544
-rw-r--r-- 1 root root 21474860756 Dec 4 03:27 MylesMpwrware1
-rw-r--r-- 1 root root 0 Dec 4 03:04 MylesMpwrware1.l
-rw-r--r-- 1 root root 0 Dec 4 01:00 MylesMpwrware2

I would expect bacula to create the volume files itself, without being told
what the volume names will be (beyond the "Vol-" prefix defined elsewhere
in the configuration, as you've noted).

In your bacula-sd.conf, for both devices (FileChgr1-Dev1 and
FileChgr1-Dev2), I recommend adjusting your ArchiveDevice = line to read
the same:
ArchiveDevice = /mnt/MylesDearDropBox/Backup/bacula/archive # no trailing
forward slash "/"!

FileChgr1-Dev1 and FileChgr1-Dev2 should both point to the same folder!
There are two such devices defined in part so that one could read
information from a volume and another could write to another volume, such
as in a copy or migration job. There may be other reasons, but in any case
the FileChgr1-Dev* devices should certainly point to the same folder.
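
For comparison, the stock file-changer Device resource is shaped roughly
like this (trimmed to the relevant directives, with your archive path
substituted in; treat it as a sketch to check against your distribution's
sample bacula-sd.conf rather than gospel):

Device {
  Name = FileChgr1-Dev1
  Media Type = File1
  Archive Device = /mnt/MylesDearDropBox/Backup/bacula/archive  # a directory, not a file
  LabelMedia = yes             # lets bacula create and label volume files itself
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 5
}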

In your current configuration, I think Vol-0001 and Vol-0002 are labels
written to the volumes themselves internally, for bacula's reference. If
these were tapes, bscan or some other bacula tool could read the tapes
(volumes) and find the labels. The label name internal to bacula doesn't
necessarily match the actual filename or the label written on the tape.
However, it usually DOES match in a default configuration like mine. I
think once you make this change to the ArchiveDevice paths and let bacula
create volumes, the bacula labels will match the filenames of the volumes.

In the case of Vol-0002 it might be that bacula intended to write to the
volume but couldn't create it because FileChgr1-Dev1 and FileChgr1-Dev2
both point to actual files and NOT to a folder. Not sure.
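
By the way, the "Vol-" prefix itself normally comes from a Label Format
directive in the Pool resource of bacula-dir.conf. The stock File pool
looks roughly like this (the values shown are the usual sample-config
defaults, so check them against your own file):

Pool {
  Name = File
  Pool Type = Backup
  Label Format = "Vol-"        # bacula appends a number: Vol-0001, Vol-0002, ...
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 365 days
  Maximum Volume Bytes = 50G   # cap on each volume file's size
  Maximum Volumes = 100
}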

For reference, here's what I see on my bacula system. See how I have files
in /opt/bacula/archive/
[root@NSF-rocky]# find /  -path /mnt -prune  -o -type f -print | grep "Vol-0"
/opt/bacula/archive/Vol-0001
/opt/bacula/archive/Vol-0040

[root@NSF-rocky EVO-Media]# ls -lah /opt/bacula/archive/
total 15G
drwxrwxr-x+  3 bacula disk   4.0K Oct 20 12:25 .
drwxrwxr-x+ 11 root   root   4.0K Sep 13 14:17 ..
-rw-r-.  1 bacula disk   1.5G Sep  6 12:25 Vol-0001
-rw-r-.  1 bacula disk    13G Dec  4 05:45 Vol-0040

in bconsole:
list volumes pool=File
Pool: File
+---------+------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+------------+
| mediaid | volumename | volstatus | enabled | volbytes      | volfiles | volretention | recycle | slot | inchanger | mediatype | voltype | volparts | lastwritten         | expiresin  |
+---------+------------+-----------+---------+---------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+------------+
|       1 | Vol-0001   | Read-Only |       1 | 1,248,082,548 |        0 |   31,536,000 |       1 |    0 |         0 | File1     |       1 |        0 | 2023-06-27 23:10:02 | 17,729,032 |
|      23 | Vol-0022   | Purged    |       1 |             0 |        0 |   31,536,000 |       1 |    0 |         0 | File1     |       1 |        0 |                     |          0 |
|      24 | Vol-0024   | Read-Only |       1 |             0 |        0 |   31,536,000 |       1 |    0 |         0 | File1     |       1 |        0 |                     |          0 |
|      25 | Vol-0025   | Read-Only |       1 |             0 |        0 |   31,536,000 |       1 |    0 |         0 | File1     |       1 |        0 |                     |          0 |
|      26 | Vol-0026   | Read-Only |       1 |             0 |        0 |   31,536,000 |       1 |    0 |         0 | File1     |       1 |        0 |                     |          0 |
|      27 | Vol-0027   | Error     |       1 |             0 |        0 |   31,536,000 |       1 |    0 |         0 | File1     |       1 |        0 |
[output truncated]

Re: [Bacula-users] Please help me to unblock my backup run

2023-12-04 Thread Rob Gerber
Maybe Dropbox or rclone or some combination of the two are limiting you to
1GiB file sizes?

In fact, your rclone process has a 1GB cache size limit
("--vfs-cache-max-size 1G"). I bet that in the case of the dd command you
ran, we filled the write cache and then dd exited. If the cache were larger
or the input command were rate limited, we might not have that issue. Maybe
if bacula backups took longer in some cases you wouldn't run into this
problem. Bacula does have a bandwidth rate limit feature, but I'd work on
the cache size or a more graceful failure mode first (like "is the cache
full? Make bacula wait a while" - admittedly something I don't know to be
possible).
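
For completeness, that rate limit lives in the Director's Client resource;
the sketch below uses a placeholder client name and address, and the unit
syntax should be checked against your version's manual:

Client {
  Name = c1-fd                       # placeholder
  Address = c1.example.com           # placeholder
  Password = "changeme"              # must match the FD's configured password
  Catalog = MyCatalog
  Maximum Bandwidth Per Job = 5MB/s  # throttle jobs for this client
}

It can also be adjusted at runtime with the bconsole "setbandwidth" command.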


By default, Bacula's file-based backup writes to file volumes. Think of it
as being like writing the backed-up files and directories to tar or zip
files - the files and directories that bacula backs up are stored in single
large archive files, using Bacula's own file format. Conceptually, Bacula
isn't using the tar format, but the bacula file writers (by default)
produce something like a tar file. I think it's done this way because way
back when Kern started developing bacula, the original destination for the
backups was a tape drive. Later, hard drives became cheaper and Kern
realized that bacula could also write to "file volumes" stored on a hard
drive. This meant customers who couldn't afford a tape drive but could
afford a larger hard drive could still use bacula. Bill recently mentioned
that bacula can write backed-up files and directories to some cloud storage
solutions directly, so the file volume method isn't used in every case, but
without special configuration on your part bacula is probably using file
volumes as described above.

Please do the following, probably as root:
sudo find / | grep -i Vol-0

Also please do
find /mnt/MylesDearDropBox/Backup/bacula/archive/

and please do
df -h

Robert Gerber
402-237-8692
r...@craeon.net

On Mon, Dec 4, 2023, 1:58 PM MylesDearBusiness  wrote:

> Hi, Rob,
>
> Thanks for the response.
>
> 1.
> I'm only using 25% of my 2TB Dropbox account, so I don't expect storage
> to be full.
>
> This particular cloud server is tiny, just a single CPU, 50GB storage,
> 2GB RAM.
>
> The biggest file I managed to write successfully to my rclone/Dropbox
> mount is 1GB:
>
> When I tried to write a bigger file, I got an "out of memory" error; in
> hindsight I suppose this was to be expected.
> I'm trying to keep costs down by renting only a very small cloud machine
> until such time as I need more capacity.
>
> root@c1:~# dd if=/dev/urandom
> of=/mnt/MylesDearDropBox/Backup/someuniquefilename.img bs=1G count=1
> 1+0 records in
> 1+0 records out
> 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 9.12953 s, 118 MB/s
> root@c1:~# ls -l /mnt/MylesDearDropBox/Backup/someuniquefilename.img
> -rw-r--r-- 1 root root 1073741824 Dec  4 19:31
> /mnt/MylesDearDropBox/Backup/someuniquefilename.img
> root@c1:~#
>
>
> so I'll tune down my bacula director config for max file size of 1G.
>
> 2. I'm still confused by what exactly "Vol-xxx" is supposed to be. I see
> there are config settings for setting this name, but I only create the
> device files MylesMpwrware and point to them in the bacula-sd
> configuration as "Archive Device".  Should I also be creating the
> "Vol-xxx" files as well?   I did see the first of my "Archive Device"
> files filling up:
>
> root@c1:~# ls -l /mnt/MylesDearDropBox/Backup/bacula/archive/
> total 20971544
> -rw-r--r-- 1 root root 21474860756 Dec  4 03:27 MylesMpwrware1
> -rw-r--r-- 1 root root   0 Dec  4 03:04 MylesMpwrware1.l
> -rw-r--r-- 1 root root   0 Dec  4 01:00 MylesMpwrware2
> root@c1:~#
>
>
> I'm sure with a little more banging my head against the wall things will
> start to make sense.
>
> Thanks,
>
> 
>
> On 2023-12-04 2:26 p.m., Rob Gerber wrote:
> > dd if=/dev/urandom
> > of=/mnt/yourdropboxmountpoint/someuniquefilename.img bs=50G count=1
>
>


Re: [Bacula-users] Please help me to unblock my backup run

2023-12-04 Thread Rob Gerber
Myles,

Some thoughts (apologies if I missed something obvious in your GitHub post):

1. I recommend testing your setup to verify that a 50GB file can be stored
the way you think it can. Maybe the storage is full. Maybe it is rate
limiting you. Maybe a maximum file size is set somewhere. To test for this
and some other possibilities, as the appropriate user, try writing a
(mostly random) 50GB file to the Dropbox storage. Randomness is important
here because it will evade any possible compression or deduplication:
dd if=/dev/urandom of=/mnt/yourdropboxmountpoint/someuniquefilename.img bs=50G count=1

WARNING: DD WILL SILENTLY OVERWRITE ANY TARGET SPECIFIED UNDER "of="! BE
CERTAIN THAT DD'S OUTPUT TARGET IS UNIQUE/DOESN'T EXIST/ISN'T IMPORTANT!
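
A lower-memory variant of the same test, since bs=50G makes dd allocate a
single 50GB buffer in RAM (which a small machine cannot do); writing the
same total in 1MB blocks avoids that:

dd if=/dev/urandom of=/mnt/yourdropboxmountpoint/someuniquefilename.img bs=1M count=51200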

2. Is Vol-0001 really stored where you think it is? Please post an 'ls
-lah' of the storage location where Vol-0001 is stored.

If the volume isn't being stored where you told Bacula to store it, has
Bacula been restarted since you last told it where to store the volume?
Maybe Bacula hasn't loaded the new configuration.
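
A quick way to be sure (service names vary by distro; these are the
Debian-style ones):

sudo systemctl restart bacula-sd bacula-director
# or reload just the Director's configuration from bconsole, no restart needed:
echo reload | bconsole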

Robert Gerber
402-237-8692
r...@craeon.net

On Mon, Dec 4, 2023, 10:43 AM MylesDearBusiness via Bacula-users <
bacula-users@lists.sourceforge.net> wrote:

> Hello again,
>
> I'm using a cloud server with rclone / Dropbox back end (which is working).
>
> I'm having trouble with a stuck Bacula run.  I have ample storage space
> but Bacula appears to be having trouble creating additional volumes.  I
> have one volume created, which was sized to a maximum of 50G, but
> appears to have bottomed out at around 30G.
>
> I want to be able to back up my entire server without any blockages, and
> to save multiple daily/weekly/monthly backups.
>
> As I've been receiving "message too long" errors from the mailing list
> server, I have placed most of the details at the following link (sorry
> for the inconvenience):
>
> https://gist.github.com/mdear/1f15e51584d17d070cb13290a48419d7
>
> Can you help me get unstuck? Any concepts I'm missing? Any
> extra/missing configuration?
>
>
> Thanks,
>
> 


Re: [Bacula-users] Please help me to unblock my backup run

2023-12-04 Thread MylesDearBusiness via Bacula-users
Well, I have one file in my Dropbox that is 29.3 GB in length, and it synced
to all my client machines without a problem.



On 2023-12-04 5:31 p.m., Chris Wilkinson wrote:

> Does Dropbox have a file size upload limit?
>
> -Chris-
>
> On Mon, 4 Dec 2023, 22:23 MylesDearBusiness via Bacula-users, 
>  wrote:
>
>> Ok, here goes ...
>>
>> root@c1:~# find / -path /mnt -prune -o -type f -print | grep "Vol-0"
>> root@c1:~#
>>
>> root@c1:~# df -h
>> Filesystem Size Used Avail Use% Mounted on
>> udev 941M 0 941M 0% /dev
>> tmpfs 198M 1.6M 196M 1% /run
>> /dev/vda1 49G 19G 30G 39% /
>> tmpfs 986M 20K 986M 1% /dev/shm
>> tmpfs 5.0M 0 5.0M 0% /run/lock
>> tmpfs 986M 0 986M 0% /sys/fs/cgroup
>> /dev/loop0 9.7M 9.7M 0 100% /snap/canonical-livepatch/246
>> /dev/loop1 9.9M 9.9M 0 100% /snap/canonical-livepatch/248
>> /dev/loop2 74M 74M 0 100% /snap/core22/864
>> /dev/loop3 43M 43M 0 100% /snap/doctl/1402
>> /dev/loop4 106M 106M 0 100% /snap/core/16091
>> /dev/loop5 92M 92M 0 100% /snap/lxd/24061
>> /dev/loop6 64M 64M 0 100% /snap/core20/1974
>> /dev/loop7 43M 43M 0 100% /snap/doctl/1445
>> /dev/vda15 105M 6.1M 99M 6% /boot/efi
>> /dev/loop8 41M 41M 0 100% /snap/snapd/20092
>> /dev/loop9 68M 68M 0 100% /snap/lxd/22753
>> /dev/loop10 106M 106M 0 100% /snap/core/16202
>> /dev/loop12 41M 41M 0 100% /snap/snapd/20290
>> /dev/loop11 2.1G 188K 2.0G 1% /tmp
>> /dev/loop13 64M 64M 0 100% /snap/core20/2015
>> tmpfs 198M 0 198M 0% /run/user/1000
>> MylesDearDropBox: 2.1T 651G 1.4T 32% /mnt/MylesDearDropBox
>> root@c1:~#
>>
>> root@c1:~# find /mnt/MylesDearDropBox/Backup/bacula/archive/
>> /mnt/MylesDearDropBox/Backup/bacula/archive/
>> /mnt/MylesDearDropBox/Backup/bacula/archive/MylesMpwrware1
>> /mnt/MylesDearDropBox/Backup/bacula/archive/MylesMpwrware1.l
>> /mnt/MylesDearDropBox/Backup/bacula/archive/MylesMpwrware2
>> root@c1:~#
>>
>> I searched through the entire Dropbox directory and no files with pattern 
>> "Vol-" were found.
>>
>> Best,
>>
>> 
>>
>> On 2023-12-04 4:14 p.m., Rob Gerber wrote:
>>> [quoted text trimmed; Rob's message appears in full earlier in this digest]

Re: [Bacula-users] Please help me to unblock my backup run

2023-12-04 Thread Chris Wilkinson
Does Dropbox have a file size upload limit?

-Chris-

On Mon, 4 Dec 2023, 22:23 MylesDearBusiness via Bacula-users, <
bacula-users@lists.sourceforge.net> wrote:

>
>
> Ok, here goes ...
>
>
> root@c1:~# find /  -path /mnt -prune  -o -type f -print | grep "Vol-0"
> root@c1:~#
>
>
> root@c1:~# df -h
> Filesystem Size  Used Avail Use% Mounted on
> udev   941M 0  941M   0% /dev
> tmpfs  198M  1.6M  196M   1% /run
> /dev/vda1   49G   19G   30G  39% /
> tmpfs  986M   20K  986M   1% /dev/shm
> tmpfs  5.0M 0  5.0M   0% /run/lock
> tmpfs  986M 0  986M   0% /sys/fs/cgroup
> /dev/loop0 9.7M  9.7M 0 100% /snap/canonical-livepatch/246
> /dev/loop1 9.9M  9.9M 0 100% /snap/canonical-livepatch/248
> /dev/loop2  74M   74M 0 100% /snap/core22/864
> /dev/loop3  43M   43M 0 100% /snap/doctl/1402
> /dev/loop4 106M  106M 0 100% /snap/core/16091
> /dev/loop5  92M   92M 0 100% /snap/lxd/24061
> /dev/loop6  64M   64M 0 100% /snap/core20/1974
> /dev/loop7  43M   43M 0 100% /snap/doctl/1445
> /dev/vda15 105M  6.1M   99M   6% /boot/efi
> /dev/loop8  41M   41M 0 100% /snap/snapd/20092
> /dev/loop9  68M   68M 0 100% /snap/lxd/22753
> /dev/loop10106M  106M 0 100% /snap/core/16202
> /dev/loop12 41M   41M 0 100% /snap/snapd/20290
> /dev/loop112.1G  188K  2.0G   1% /tmp
> /dev/loop13 64M   64M 0 100% /snap/core20/2015
> tmpfs  198M 0  198M   0% /run/user/1000
> MylesDearDropBox:  2.1T  651G  1.4T  32% /mnt/MylesDearDropBox
> root@c1:~#
>
> root@c1:~# find /mnt/MylesDearDropBox/Backup/bacula/archive/
> /mnt/MylesDearDropBox/Backup/bacula/archive/
> /mnt/MylesDearDropBox/Backup/bacula/archive/MylesMpwrware1
> /mnt/MylesDearDropBox/Backup/bacula/archive/MylesMpwrware1.l
> /mnt/MylesDearDropBox/Backup/bacula/archive/MylesMpwrware2
> root@c1:~#
>
>
> I searched through the entire Dropbox directory and no files with pattern
> "Vol-" were found.
>
>
> Best,
>
> 
>
> On 2023-12-04 4:14 p.m., Rob Gerber wrote:
> > [quoted text trimmed; Rob's message appears in full earlier in this digest]

Re: [Bacula-users] Please help me to unblock my backup run

2023-12-04 Thread MylesDearBusiness via Bacula-users
Hi, Rob,

Thanks for the response.

1.
I'm only using 25% of my 2TB Dropbox account, so I don't expect storage 
to be full.

This particular cloud server is tiny, just a single CPU, 50GB storage, 
2GB RAM.

The biggest file I managed to write successfully to my rclone/Dropbox 
mount is 1GB:

When I tried to write a bigger file, I got an "out of memory" error; in
hindsight I suppose this was to be expected.
I'm trying to keep costs down by renting only a very small cloud machine
until such time as I need more capacity.

root@c1:~# dd if=/dev/urandom 
of=/mnt/MylesDearDropBox/Backup/someuniquefilename.img bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 9.12953 s, 118 MB/s
root@c1:~# ls -l /mnt/MylesDearDropBox/Backup/someuniquefilename.img
-rw-r--r-- 1 root root 1073741824 Dec  4 19:31 
/mnt/MylesDearDropBox/Backup/someuniquefilename.img
root@c1:~#


so I'll tune down my bacula director config for max file size of 1G.
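
Presumably that means the Maximum Volume Bytes directive in the Pool
resource; a sketch of just that change, assuming the stock File pool:

Pool {
  Name = File
  Pool Type = Backup
  Label Format = "Vol-"
  Maximum Volume Bytes = 1G   # keep each volume under the 1 GiB limit seen above
}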

2. I'm still confused by what exactly "Vol-xxx" is supposed to be. I see
there are config settings for setting this name, but I only create the
device files MylesMpwrware and point to them in the bacula-sd
configuration as "Archive Device".  Should I also be creating the
"Vol-xxx" files as well?   I did see the first of my "Archive Device"
files filling up:

root@c1:~# ls -l /mnt/MylesDearDropBox/Backup/bacula/archive/
total 20971544
-rw-r--r-- 1 root root 21474860756 Dec  4 03:27 MylesMpwrware1
-rw-r--r-- 1 root root   0 Dec  4 03:04 MylesMpwrware1.l
-rw-r--r-- 1 root root   0 Dec  4 01:00 MylesMpwrware2
root@c1:~#


I'm sure with a little more banging my head against the wall things will 
start to make sense.

Thanks,



On 2023-12-04 2:26 p.m., Rob Gerber wrote:
> dd if=/dev/urandom 
> of=/mnt/yourdropboxmountpoint/someuniquefilename.img bs=50G count=1 





Re: [Bacula-users] Please help me to unblock my backup run

2023-12-04 Thread MylesDearBusiness via Bacula-users
Ok, here goes ...

root@c1:~# find / -path /mnt -prune -o -type f -print | grep "Vol-0"
root@c1:~#

root@c1:~# df -h
Filesystem         Size  Used Avail Use% Mounted on
udev               941M     0  941M   0% /dev
tmpfs              198M  1.6M  196M   1% /run
/dev/vda1           49G   19G   30G  39% /
tmpfs              986M   20K  986M   1% /dev/shm
tmpfs              5.0M     0  5.0M   0% /run/lock
tmpfs              986M     0  986M   0% /sys/fs/cgroup
/dev/loop0         9.7M  9.7M     0 100% /snap/canonical-livepatch/246
/dev/loop1         9.9M  9.9M     0 100% /snap/canonical-livepatch/248
/dev/loop2          74M   74M     0 100% /snap/core22/864
/dev/loop3          43M   43M     0 100% /snap/doctl/1402
/dev/loop4         106M  106M     0 100% /snap/core/16091
/dev/loop5          92M   92M     0 100% /snap/lxd/24061
/dev/loop6          64M   64M     0 100% /snap/core20/1974
/dev/loop7          43M   43M     0 100% /snap/doctl/1445
/dev/vda15         105M  6.1M   99M   6% /boot/efi
/dev/loop8          41M   41M     0 100% /snap/snapd/20092
/dev/loop9          68M   68M     0 100% /snap/lxd/22753
/dev/loop10        106M  106M     0 100% /snap/core/16202
/dev/loop12         41M   41M     0 100% /snap/snapd/20290
/dev/loop11        2.1G  188K  2.0G   1% /tmp
/dev/loop13         64M   64M     0 100% /snap/core20/2015
tmpfs              198M     0  198M   0% /run/user/1000
MylesDearDropBox:  2.1T  651G  1.4T  32% /mnt/MylesDearDropBox
root@c1:~#

root@c1:~# find /mnt/MylesDearDropBox/Backup/bacula/archive/
/mnt/MylesDearDropBox/Backup/bacula/archive/
/mnt/MylesDearDropBox/Backup/bacula/archive/MylesMpwrware1
/mnt/MylesDearDropBox/Backup/bacula/archive/MylesMpwrware1.l
/mnt/MylesDearDropBox/Backup/bacula/archive/MylesMpwrware2
root@c1:~#

I searched through the entire Dropbox directory and no files with pattern 
"Vol-" were found.

Best,



On 2023-12-04 4:14 p.m., Rob Gerber wrote:
> [quoted text trimmed; Rob's message appears in full earlier in this digest]

[Bacula-users] Any suggestions for fail2ban jail for Bacula Director ?

2023-12-04 Thread MylesDearBusiness via Bacula-users
Hello,

I just installed Bacula director on one of my cloud servers.

I have set the firewall to allow traffic in and out on port 9101 so that
it can be used to orchestrate remote backups as well.

What I want to do is to identify the potential attack surface and create 
a fail2ban jail configuration.

Does anybody have an exemplar that I can work with?

Also, is there a way to simulate a failed login attempt with a tool such
as netcat?  I could possibly use Postman and dig into the REST API spec,
but I was hoping the community could shortcut this effort.
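
Here is the hypothetical starting point I have sketched so far; the log
path and failregex are guesses that would need to be matched against the
authentication-failure lines the Director actually logs:

# /etc/fail2ban/jail.d/bacula-dir.local
[bacula-dir]
enabled  = true
port     = 9101
filter   = bacula-dir
logpath  = /var/log/bacula/bacula.log   # wherever the Messages resource appends
maxretry = 3
bantime  = 3600

# /etc/fail2ban/filter.d/bacula-dir.conf
[Definition]
# Placeholder pattern: adjust to the Director's real auth-failure wording.
failregex = authenticat.* from .*<HOST>

For a crude test, netcat can at least open the port and feed it garbage,
which should register as a failed connection attempt:

printf 'junk\n' | nc -w 5 director.example.com 9101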

What say you?

Thanks,








[Bacula-users] Please help me to unblock my backup run

2023-12-04 Thread MylesDearBusiness via Bacula-users
Hello again,

I'm using a cloud server with rclone / Dropbox back end (which is working).

I'm having trouble with a stuck Bacula run.  I have ample storage space 
but Bacula appears to be having trouble creating additional volumes.  I 
have one volume created, which was sized to a maximum of 50G, but 
appears to have bottomed out at around 30G.

I want to be able to back up my entire server without any blockages, and 
to save multiple daily/weekly/monthly backups.

As I've been receiving "message too long" errors from the mailing list
server, I have placed most of the details at the following link (sorry
for the inconvenience):

https://gist.github.com/mdear/1f15e51584d17d070cb13290a48419d7

Can you help me get unstuck? Any concepts I'm missing? Any
extra/missing configuration?


Thanks,

