Ok, here goes ...

root@c1:~# find / -path /mnt -prune -o -type f -print | grep "Vol-0"
root@c1:~#

root@c1:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 941M 0 941M 0% /dev
tmpfs 198M 1.6M 196M 1% /run
/dev/vda1 49G 19G 30G 39% /
tmpfs 986M 20K 986M 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 986M 0 986M 0% /sys/fs/cgroup
/dev/loop0 9.7M 9.7M 0 100% /snap/canonical-livepatch/246
/dev/loop1 9.9M 9.9M 0 100% /snap/canonical-livepatch/248
/dev/loop2 74M 74M 0 100% /snap/core22/864
/dev/loop3 43M 43M 0 100% /snap/doctl/1402
/dev/loop4 106M 106M 0 100% /snap/core/16091
/dev/loop5 92M 92M 0 100% /snap/lxd/24061
/dev/loop6 64M 64M 0 100% /snap/core20/1974
/dev/loop7 43M 43M 0 100% /snap/doctl/1445
/dev/vda15 105M 6.1M 99M 6% /boot/efi
/dev/loop8 41M 41M 0 100% /snap/snapd/20092
/dev/loop9 68M 68M 0 100% /snap/lxd/22753
/dev/loop10 106M 106M 0 100% /snap/core/16202
/dev/loop12 41M 41M 0 100% /snap/snapd/20290
/dev/loop11 2.1G 188K 2.0G 1% /tmp
/dev/loop13 64M 64M 0 100% /snap/core20/2015
tmpfs 198M 0 198M 0% /run/user/1000
MylesDearDropBox: 2.1T 651G 1.4T 32% /mnt/MylesDearDropBox
root@c1:~#

root@c1:~# find /mnt/MylesDearDropBox/Backup/bacula/archive/
/mnt/MylesDearDropBox/Backup/bacula/archive/
/mnt/MylesDearDropBox/Backup/bacula/archive/MylesMpwrware1
/mnt/MylesDearDropBox/Backup/bacula/archive/MylesMpwrware1.l
/mnt/MylesDearDropBox/Backup/bacula/archive/MylesMpwrware2
root@c1:~#

I searched the whole filesystem and the Dropbox archive directory, and found 
no files matching the pattern "Vol-" anywhere.
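
In case it helps the debugging, here is my current understanding of where 
those "Vol-" names would come from - a sketch of a Bacula Pool resource 
(directive names are from the Bacula manual; the pool name and sizes are 
placeholders, not my actual config):

```conf
# bacula-dir.conf, Pool resource -- sketch only, "DropboxPool" is a placeholder
Pool {
  Name = DropboxPool
  Pool Type = Backup
  Label Format = "Vol-"        # Bacula auto-labels volumes, e.g. Vol-0001, Vol-0002, ...
  Maximum Volume Bytes = 1G    # cap each file volume at the 1 GiB the mount handled
  Maximum Volumes = 100        # optional ceiling on the number of volumes
}
```

If I understand correctly, the Vol-* files are created by Bacula itself when 
it labels a volume, so they wouldn't exist until a job actually writes one.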

Best,

<Myles>

On 2023-12-04 4:14 p.m., Rob Gerber wrote:
> Maybe Dropbox or rclone or some combination of the two are limiting you to
> 1GiB file sizes?
>
> In fact, for your rclone process I see it has a 1GB cache size limit:
> "--vfs-cache-max-size 1G". I bet in the case of the dd command you did, we
> filled the write cache and then dd exited. If the cache was larger or the
> input command was rate limited, we might not have that issue. Maybe if
> bacula backups took longer in some cases you wouldn't run into this
> problem. Bacula does have a bandwidth rate limit feature, but I'd work on
> the cache size or a more graceful failure mode first (like "is cache full?
> Make bacula wait a while" - admittedly something I don't know to be
> possible).
>
> By default, Bacula's file based backup writes to file volumes. Think of it
> as being like writing the backed up files and directories to tar or zip
> files - the files and directories that are backed up by bacula are stored
> in single large archive files, using Bacula's own file format.
> Conceptually, Bacula isn't using the tar format, but the bacula file
> writers (by default) are using something like a tar file. I think it's
> done this way because way back when Kern started developing bacula, the
> original destination for the backups was a tape drive. Later, hard drives
> became cheaper and Kern realized that bacula could also write to "file
> volumes" that were stored on a hard drive. This means some customers who
> couldn't afford a tape drive but could afford a larger hard drive could
> use bacula. Bill recently mentioned that bacula can write backed up files
> and directories to some cloud storage solutions directly, so I think the
> file volume method isn't used in every case by bacula, but without special
> configuration on your part bacula is probably using these file volumes as
> described above.
>
> Please do the following, probably as root:
>
> sudo find / | grep -i Vol-0
>
> Also please do
>
> find /mnt/MylesDearDropBox/Backup/bacula/archive/
>
> and please do
>
> df -h
>
> Robert Gerber
> 402-237-8692
> r...@craeon.net
>
> On Mon, Dec 4, 2023, 1:58 PM MylesDearBusiness <md...@mpwrware.ca> wrote:
>
>> Hi, Rob,
>>
>> Thanks for the response.
>>
>> 1. I'm only using 25% of my 2TB Dropbox account, so I don't expect
>> storage to be full.
>>
>> This particular cloud server is tiny, just a single CPU, 50GB storage,
>> 2GB RAM.
>>
>> The biggest file I managed to write successfully to my rclone/Dropbox
>> mount is 1GB.
>>
>> When I tried to write a bigger file, I got an "out of memory" error; in
>> hindsight I suppose this was to be expected. I'm trying to keep costs
>> down by renting only a very small cloud machine until such time as I
>> need the capacity increase.
>>
>> root@c1:~# dd if=/dev/urandom of=/mnt/MylesDearDropBox/Backup/someuniquefilename.img bs=1G count=1
>> 1+0 records in
>> 1+0 records out
>> 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 9.12953 s, 118 MB/s
>> root@c1:~# ls -l /mnt/MylesDearDropBox/Backup/someuniquefilename.img
>> -rw-r--r-- 1 root root 1073741824 Dec 4 19:31 /mnt/MylesDearDropBox/Backup/someuniquefilename.img
>> root@c1:~#
>>
>> so I'll tune down my bacula director config for a max file size of 1G.
>>
>> 2. I'm still confused by what exactly "Vol-xxx" is supposed to be. I see
>> there are config settings for setting this name, but I only create the
>> device files MylesMpwrware<x> and point to them in the bacula-sd
>> configuration as "Archive Device". Should I also be creating the
>> "Vol-xxx" files as well? I did see the first of my "Archive Device"
>> files filling up:
>>
>> root@c1:~# ls -l /mnt/MylesDearDropBox/Backup/bacula/archive/
>> total 20971544
>> -rw-r--r-- 1 root root 21474860756 Dec 4 03:27 MylesMpwrware1
>> -rw-r--r-- 1 root root 0 Dec 4 03:04 MylesMpwrware1.l
>> -rw-r--r-- 1 root root 0 Dec 4 01:00 MylesMpwrware2
>> root@c1:~#
>>
>> I'm sure with a little more banging my head against the wall things
>> will start to make sense.
>>
>> Thanks,
>>
>> <Myles>
>>
>> On 2023-12-04 2:26 p.m., Rob Gerber wrote:
>>> dd if=/dev/urandom of=/mnt/yourdropboxmountpoint/someuniquefilename.img bs=50G count=1
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
