On Saturday 2021-12-25 00:12:09 Phil Stracchino wrote:
> For the first time I'm trying to run a restore directly from a copy job,
> after a complete failure of my NAS.  I've got a second storage daemon
> temporarily installed on my workstation, I have the external disk
> chassis that holds my rotating archive copy sets attached to a
> temporary server running a clean Solaris 11.4 install, and I have the
> most recent full backup set copy set mounted.  And I'm trying to do a
> test restore of 30 or so small files.
> 
> Status dir says:
> 
>   JobId  Type Level     Files     Bytes  Name              Status
> ======================================================================
>   34562  Rest Rest          0         0  Restore           is waiting for a mount request
> ====
> 
> Status storage says:
> 
> 
> Reading: Full Restore job Restore JobId=34562
> Volume="ARCHIVE-20211206-14:00"
>      pool="Scratch" device="ArchiveCopy" (/arcpool) newbsr=0
>      Files=0 Bytes=0 AveBytes/sec=0 LastBytes/sec=0
>      FDReadSeqNo=7 in_msg=7 out_msg=7 fd=6
> Director connected at: 24-Dec-21 23:59
> ====
> 
> Jobs waiting to reserve a drive:
> ====
> 
> Terminated Jobs:
>   JobId  Level    Files      Bytes   Status   Finished        Name
> ===================================================================
>   34558  Rest          0         0   Cancel   24-Dec-21 23:17 Restore
>   34559  Rest          0         0   Cancel   24-Dec-21 23:45 Restore
> ====
> 
> Device status:
> 
> Device File: "ArchiveCopy" (/arcpool) is not open.
>     Device is BLOCKED waiting for mount of volume
> "ARCHIVE-20211206-14:00", Pool:        Scratch
>         Media type:  File
>     Available Space=711.4 GB
> 
> 
> And if I try to mount:
> 
> *mount
> Automatically selected Catalog: Catalog
> Using Catalog "Catalog"
> Automatically selected Storage: babylon5-archive
> 3906 File device ""ArchiveCopy" (/arcpool)" is always mounted.
> 
> 
> 
> There is one complication.  The machine running the storage daemon does
> not have enough local storage to copy the ARCHIVE-20211206-14:00 volume
> over.  And I've had no luck so far trying to get a working development
> environment onto the server that has the external chassis attached.  I
> can't attach it to the machine running the storage daemon because
> OpenZFS and Solaris ZFS zpools are incompatible.  Thank you OH SO VERY
> MUCH ORACLE.
> 
> So, the zfs filesystem containing the archive file is remotely mounted
> via NFS.

Hello Phil

> Is that what's causing the problem here?

I don't think that NFS would be a problem here, assuming it's correctly
set up.
I would double-check the ownership and permissions of the files and
directories.
The way NFS maps UIDs and GIDs could also affect access
(check the no_root_squash export option).
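For example, something along these lines (a rough sketch only -- the
export line, client name and the "bacula" user are assumptions, adjust
to your actual server and SD configuration):

```shell
# On the NFS server, the export would need no_root_squash if the SD
# accesses volumes as root. On a Linux server this lives in
# /etc/exports (Solaris uses "share -F nfs" instead):
#   /arcpool   sd-host(rw,sync,no_root_squash)

# On the SD host, check that the volume file is visible and readable
# over the NFS mount, both as root and as the user bacula-sd runs as:
ls -l /arcpool/ARCHIVE-20211206-14:00
sudo -u bacula dd if=/arcpool/ARCHIVE-20211206-14:00 \
    of=/dev/null bs=64k count=16
```

If the dd read fails with a permission error, that would explain the
SD sitting in "waiting for mount" while refusing to open the volume.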

> Does anyone know a solution?

I will assume that you already know about the "copies" option and that
you need to find and use the JobId of the copy job instead of the JobId
of the copied job.
If that's not the case, I would suggest looking into the thread
"[Bacula-users] Restoring from a copy job" started on 2021-11-29 21:01,
where I successfully managed to perform a restore from a copied job.
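For reference, the rough shape of such a bconsole session (the JobIds
below are placeholders, and the exact restore modifiers may vary with
your Bacula version):

```shell
# In bconsole: list the copy jobs made from the original backup job
# (34500 is a placeholder -- use the JobId of the job that was copied):
*list copies jobid=34500

# Then restore using the JobId of the copy job itself, not the JobId
# of the original copied job:
*restore jobid=34510 all done
```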

> I don't suppose anyone has a set of Bacula 11.0.5 packages for Solaris 
> 11.4 amd64 compiled...?

I haven't been using Bacula on Solaris for quite some time so I can't
help you here.

I hope you'll find the solution.


Regards!

-- 
Josip Deanovic


_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users