Re: [Bacula-users] Troubles with AWS [Possibly solved]

2023-01-18 Thread Chris Wilkinson
This is the response from Backblaze. The SD would have to retry, but that
didn't seem to happen in the example I posted; it just moved on to the next
part file.

"The "No Tomes Available" error refers to a potential and common http
500/503 response from our api. This error indicates that after the client
authorized, and was directed to a storage location, the location then
reported it was full/unavailable and a new upload location assignment is
needed.

You can read more about that in this blog post.
https://www.backblaze.com/blog/b2-503-500-server-error/

Typically retrying the upload after waiting up to 30 seconds will result in
a successful upload.

Please let us know if there's anything further we can assist with."
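To make Backblaze's advice concrete: on a 500/503 "no tomes available" response the client is supposed to request a *new* upload location and retry after a short wait. A minimal sketch of that behaviour (the function names `get_upload_url` and `upload_part` are placeholders, not the actual B2 API or Bacula SD code):

```python
import random
import time

def upload_with_retry(part, get_upload_url, upload_part,
                      max_attempts=5, max_wait=30):
    """Retry a part upload, fetching a fresh upload location each attempt.

    On a failed tome the old location must not be reused, so the URL is
    re-requested before every try. Backoff is capped at max_wait seconds
    (Backblaze suggests waiting up to 30 s).
    """
    for attempt in range(1, max_attempts + 1):
        url = get_upload_url()          # new storage location each time
        if upload_part(url, part) == 200:
            return True
        if attempt < max_attempts:
            time.sleep(min(max_wait, random.uniform(1, 2 ** attempt)))
    return False
```

This is the retry loop the SD log in this thread does not appear to perform: it reports the error and moves on to the next part instead of re-requesting a location.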

On Tue, 17 Jan 2023, 4:28 pm Chris Wilkinson wrote:

> Backblaze provides an integration checklist for developers.
>
> https://www.backblaze.com/b2/docs/integration_checklist.html
>
> I’m wondering if anyone could comment on whether the S3 driver is
> compliant with this?
>
> I’ve asked the question of Backblaze what the S3 driver is supposed to do
> when this error occurs and will report the response.
>
> Best
> -Chris-
>
>
>
>
> On 17 Jan 2023, at 15:12, Heitor Faria wrote:
>
> Hello Chris,
>
> This new upload error occurred today.
>
> "No tomes available" is expected to occur sometimes and the client is
> expected to retry. This doesn't seem to be working.
>
> Could this be a bug with the S3 driver?
>
> A while ago, Backblaze didn't have S3 support, and one had to rely on
> adapters such as MinIO. Ref.:
> https://www.backblaze.com/blog/how-to-use-minio-with-b2-cloud-storage/
> Nowadays, I suppose they have deployed native S3 integration, but we
> can never be sure it is 100% compliant with the S3 standard. I found some
> "no tomes available" error reports from other applications, as follows.
> "does S3QL stop or is this just a warning? Looks like a "tome" is a
> storage unit (
> https://help.backblaze.com/hc/en-us/articles/218485257-B2-Resiliency-Durability-and-Availability
> ) and B2 had none available there for a while." Ref.:
> https://groups.google.com/g/s3ql/c/H1EGYyw6mWs
> That said, I think you should contact Backblaze support first, before
> putting effort into debugging the Bacula SD.
>
> Rgds.
> --
>
> MSc Heitor Faria (Miami/USA)
> Bacula LATAM CIO
> mobile1: + 1 909 655-8971
> mobile2: + 55 61 98268-4220
> bacula.lat | bacula.com.br 
>
>
>
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users



Re: [Bacula-users] Troubles with AWS [Possibly solved]

2023-01-16 Thread Chris Wilkinson
I don't specify "Maximum Concurrent Uploads", so it defaults to 3 per the
manual. I have differing maximum part sizes for the jobs, but the ones that
frequently fail are 500 MB.

I'm bandwidth-limited to 6 MB/s, so I don't see any benefit from trying to
run uploads concurrently. My largest backup is 350 GB, which takes around 20 h.

Let us know what values seem to work best.

Chris


On Mon, 16 Jan 2023, 8:05 am Andrea Venturoli wrote:

> On 1/13/23 18:26, Andrea Venturoli wrote:
>
> > Then I upgraded every package (don't think it matters, still no harm)
> > and now I'm trying with "cloud upload" manually. I'll see how this goes.
>
> Still troublesome, but now the error is more verbose:
>
> > 3000 Cloud Upload: Full0116/part.195   state=error   size=99.99 MB
> duration=19871s msg=S3_put_object ERR=RequestTorrentOfBucketError CURL
> Effective URL: https://s3.eu-south-1.amazonaws.com/Full0116/part.195 CURL
> OS Error: 54 CURL Effective URL:
> https://s3.eu-south-1.amazonaws.com/Full0116/part.195 CURL OS Error: 54
> retry=10
>
> So we get "CURL OS Error: 54".
>
> If I get it right errno=54 is "ECONNRESET Connection reset by peer. A
> connection was forcibly closed by a peer.  This normally results from a
> loss of the connection on the remote socket due to a timeout or a reboot".
>
>
>
>
> So I tried lowering "Maximum Concurrent Uploads" from 20 to 1.
> Now it works properly, although uploading a full backup will take some 2
> days...
> I'll try fiddling again with intermediate values.
>
> Another thing I thought of would be to lower "Maximum Part Size",
> currently at 100 MB. Of course this would not help for the backups
> already taken, I think.
>
>   bye & Thanks
> av.
>
>
>
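For reference, the knobs Andrea is tuning live in the Storage Daemon's Cloud resource. A minimal sketch with illustrative values (directive names per the Bacula cloud backup documentation; the bucket, host, and credentials are placeholders, not taken from this thread):

```conf
Cloud {
  Name = "S3Cloud"
  Driver = "S3"
  HostName = "s3.eu-south-1.amazonaws.com"
  BucketName = "mybucket"              # placeholder
  AccessKey = "xxxx"
  SecretKey = "xxxx"
  Protocol = HTTPS

  # The two values discussed above:
  Maximum Concurrent Uploads = 1       # down from 20 while diagnosing resets
  Maximum Part Size = 50000000         # ~50 MB; smaller parts are cheaper to retry
}
```

Note that, as Andrea says, a smaller part size only affects volumes written after the change; existing parts keep the size they were written with.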
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users

