I am testing bareos-19.2.7-2 on CentOS Linux release 7.4.1708, including 
the bareos-storage-droplet RPM.

We want to evaluate how well we can back up to the cloud, or to a Ceph 
server of our own.  I have tried both, and both fail.

The credentials are valid and were tested independently.  (Unless droplet 
wants them specially encoded?)
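
For reference, a check along these lines (a sketch using the AWS CLI; the 
bucket and region are the ones from the profile below) is what I mean by 
testing the keys outside of Bareos:

# Sketch: check the same access/secret key pair outside Bareos.
# Assumes the AWS CLI is installed; bucket/region are from awstest.profile.
export AWS_ACCESS_KEY_ID=redacted
export AWS_SECRET_ACCESS_KEY=redacted
aws s3api head-bucket --bucket bareos-test-uw --region us-east-2
echo "head-bucket exit status: $?"   # 0 means the keys can see the bucket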

Since other people have been able to get this working, I assume there's 
some pilot error in the configuration.  (Other backup tests worked.)

I would be grateful for any suggestions as to what I'm doing wrong.
jim

To illustrate, this is the AWS configuration:

        /etc/bareos/bareos-sd.d/device/awstest.conf
Device {
  Name = awstest
  Media Type = S3_Object2
  Archive Device = "bareostestuw"       # This doesn't work when I use "bareos-test-uw" either
  # testing:
  Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/awstest.profile,bucket=bareos-test-uw,chunksize=100M,iothreads=0,retries=1"
  Device Type = droplet
  Label Media = yes                    # lets Bareos label unlabeled media
  Random Access = yes
  Automatic Mount = yes                # when device opened, read it
  Removable Media = no
  Always Open = no
  Description = "S3 device"
  Maximum Concurrent Jobs = 1
}

        /etc/bareos/bareos-sd.d/device/droplet/awstest.profile
host = bareos-test-uw.s3.amazonaws.com
use_https = true
backend = s3
aws_region = us-east-2
aws_auth_sign_version = 4
access_key = "redacted"
secret_key = "redacted"
pricing_dir = ""


        /etc/bareos/bareos-dir.d/storage/awstest.conf
Storage {
  Name = awstest
  Address = bareos-test-sd.icecube.wisc.edu       # N.B. Use a fully qualified name here (do not use "localhost").
  Password = "redacted"
  Device = awstest
  Media Type = S3_Object2
}

Etc.

The job fails in WriteNewVolumeLabel with:
10-Nov 09:00 bareos-sd: ERROR in backends/droplet_device.cc:111 error: src/conn.c:389: init_ssl_conn: SSL connect error: 0: 0
10-Nov 09:00 bareos-sd: ERROR in backends/droplet_device.cc:111 error: src/conn.c:392: init_ssl_conn: SSL certificate verification status: 0: ok
10-Nov 09:00 bareos-sd JobId 131: Warning: stored/label.cc:390 Open device "awstest" (bareostestuw) Volume "cfull0010" failed: ERR=stored/dev.cc:747 Could not open: bareostestuw/cfull0010, ERR=Success
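
To narrow down the SSL connect error, the TLS handshake can be tested by 
hand, independent of droplet (a sketch; the host is the one from 
awstest.profile):

# Sketch: test the TLS handshake to the endpoint named in the profile.
# A successful handshake prints the server certificate chain and exits.
openssl s_client -connect bareos-test-uw.s3.amazonaws.com:443 \
        -servername bareos-test-uw.s3.amazonaws.com </dev/null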


============

When I test this on our Ceph system, the result is quite similar, and 
there I have access to the system logs.
The Ceph server logs include a "HEAD" request, suggesting that it is 
failing while trying to get bucket information.  The logged error is 
HTTP 403: something is forbidden.


2020-11-09 15:15:42.020 7f84290ea700  1 ====== starting new request req=0x7f84290e37f0 =====
2020-11-09 15:15:42.020 7f84290ea700  1 ====== req done req=0x7f84290e37f0 op status=0 http_status=403 latency=0s ======
2020-11-09 15:15:42.020 7f84290ea700  1 civetweb: 0x55d654014000: 10.128.108.133 - - [09/Nov/2020:15:15:41 -0600] "HEAD / HTTP/1.1" 403 231 - -
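
That HEAD request can be reproduced by hand against the gateway (a sketch; 
https://ceph-gateway.example is a placeholder for our radosgw URL, and the 
bucket name is taken from the warnings below):

# Sketch: reproduce the HEAD request the radosgw logged, outside Bareos.
# --endpoint-url points the AWS CLI at the Ceph gateway instead of AWS.
aws s3api head-bucket --bucket baretest --endpoint-url https://ceph-gateway.example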

The Bareos messages say it is failing in both WriteNewVolumeLabel and 
MountNextWriteVolume:
Warning: stored/label.cc:390 Open device "uwcephS3" (baretest) Volume "cfull0009" failed: ERR=stored/dev.cc:747 Could not open: baretest/cfull0009, ERR=Success
Warning: stored/label.cc:390 Open device "uwcephS3" (baretest) Volume "cfull0009" failed: ERR=stored/dev.cc:747 Could not open: baretest/cfull0009, ERR=Success
Warning: stored/mount.cc:275 Open device "uwcephS3" (baretest) Volume "cfull0009" failed: ERR=stored/dev.cc:747 Could not open: baretest/cfull0009, ERR=Success
