Kenny,
OK. Sorry for the confusion about the bluewhale client. This is why we need
your full restore command: to see exactly what's happening.
When restoring, you should not change the Backup Client, even if that
machine is dead; you should change the Restore Client instead. It would
also help to know its version.
Regards,
On Mon, Sep 8, 2014 at 2:11 PM, Kenny Noe <knoe...@gmail.com> wrote:
> Heitor,
>
> Pardon my ignorance, but I don't follow your questions...
>
> My original client "bluewhale" suffered a fatal HD failure, hence the need
> to recover. I have been changing the "client" and "where" parameters while
> walking through the restore procedure. When the restore process runs, it
> creates the requested file "mail.tar", but it is zero bytes.
>
> If you give me a bit more to go on, I'll gladly share whatever input /
> output you need.
>
> Thanks --Kenny
>
>
>
>
> On Mon, Sep 8, 2014 at 12:57 PM, Heitor Faria <hei...@bacula.com.br>
> wrote:
>
>> Kenny,
>>
>> 1. Could you paste the exact input you used when submitting the restore?
>> 2. Can you tell us the bluewhale client's version?
>>
>> Regards,
>>
>> On Mon, Sep 8, 2014 at 1:48 PM, Kenny Noe <knoe...@gmail.com> wrote:
>>
>>> Heitor,
>>>
>>> Hi! Thanks for the reply.... I'm using the Bacula console, executing the
>>> "restore" command, and then walking through the prompts given. What is
>>> this "run" command? Yes, I believe I'm trying to restore the same file,
>>> but it's not working.
>>>
>>> Thoughts?
>>>
>>> Thanks --Kenny
>>>
>>>
>>> On Mon, Sep 8, 2014 at 12:41 PM, Heitor Faria <hei...@bacula.com.br>
>>> wrote:
>>>
>>>> Kenny,
>>>>
>>>> First of all, are you using the "run" command to submit a previously
>>>> configured Restore Job? I think this is not advisable, since there are
>>>> several restore variables that only the "restore" command can fetch.
>>>> Did you try to restore the same file with the "restore" command?
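>>>>
>>>> For example, the difference is roughly this (a sketch, using your job
>>>> and client names):
>>>>
>>>>     run job=Restore_mail_bluewhale     # replays the static defaults in the Job resource
>>>>     restore client=bluewhale select    # queries the catalog and prompts for files, client, where
>>>>
>>>> The second form lets bconsole work out the volumes and restore
>>>> parameters interactively.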
>>>>
>>>> Regards,
>>>>
>>>>
>>>> On Mon, Sep 8, 2014 at 1:28 PM, Kenny Noe <knoe...@gmail.com> wrote:
>>>>
>>>>> Dan,
>>>>>
>>>>> Thanks for the reply. I tried this this morning and the restore still
>>>>> failed. In "status storage=Storage_bluewhale", the "Running Jobs"
>>>>> section shows Files=0, Bytes=0, Bytes/sec=0. However, in the "Device
>>>>> status" section, Device "File_bluewhale" is mounted and the Total Bytes
>>>>> Read and Blocks Read go up.... And now, with the simplified config, it
>>>>> seems to have lost its Pool.
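>>>>>
>>>>> To sanity-check the volume's pool, I believe bconsole can show it with
>>>>> something like this (5.2-era syntax):
>>>>>
>>>>>     list media pool=Pool_mail_bluewhale
>>>>>     list jobmedia jobid=12897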
>>>>>
>>>>> I'm confused... I changed the "where" parameter to restore to
>>>>> /nas/users/admin/backups and changed the client config to remove the
>>>>> fifo headache... but it is still trying to use the fifo...
>>>>>
>>>>> Here is the error from the log:
>>>>> 08-Sep 11:51 BS01-DIR1 JobId 12897: Start Restore Job Restore_mail_bluewhale.2014-09-08_11.51.55_03
>>>>> 08-Sep 11:51 BS01-DIR1 JobId 12897: Using Device "File_bluewhale"
>>>>> 08-Sep 11:51 BS01-SD1 JobId 12897: Ready to read from volume "mail-0386" on device "File_bluewhale" (/nas/bacula/bluewhale).
>>>>> 08-Sep 11:51 BS01-SD1 JobId 12897: Forward spacing Volume "mail-0386" to file:block 0:219.
>>>>> 08-Sep 12:14 BS01-SD1 JobId 12897: End of Volume at file 28 on device "File_bluewhale" (/nas/bacula/bluewhale), Volume "mail-0386"
>>>>> 08-Sep 12:14 BS01-SD1 JobId 12897: End of all volumes.
>>>>> 08-Sep 11:52 BS01-FD1 JobId 12897: Error: create_file.c:292 Could not open /nas/users/admin/backups/data/backups/mail/fifo/mail.tar: ERR=Interrupted system call
>>>>> 08-Sep 12:14 BS01-DIR1 JobId 12897: Bacula BS01-DIR1 5.2.2 (26Nov11):
>>>>>   Build OS:               x86_64-unknown-linux-gnu ubuntu 11.10
>>>>>   JobId:                  12897
>>>>>   Job:                    Restore_mail_bluewhale.2014-09-08_11.51.55_03
>>>>>   Restore Client:         besc-bs01
>>>>>   Start time:             08-Sep-2014 11:51:57
>>>>>   End time:               08-Sep-2014 12:14:09
>>>>>   Files Expected:         1
>>>>>   Files Restored:         0
>>>>>   Bytes Restored:         0
>>>>>   Rate:                   0.0 KB/s
>>>>>   FD Errors:              0
>>>>>   FD termination status:  OK
>>>>>   SD termination status:  OK
>>>>>   Termination:            Restore OK -- warning file count mismatch
>>>>>
>>>>> 08-Sep 12:14 BS01-DIR1 JobId 12897: Begin pruning Jobs older than 15 days.
>>>>> 08-Sep 12:14 BS01-DIR1 JobId 12897: No Jobs found to prune.
>>>>> 08-Sep 12:14 BS01-DIR1 JobId 12897: Begin pruning Files.
>>>>> 08-Sep 12:14 BS01-DIR1 JobId 12897: No Files found to prune.
>>>>> 08-Sep 12:14 BS01-DIR1 JobId 12897: End auto prune.
>>>>>
>>>>>
>>>>> What is a file count mismatch?
>>>>>
>>>>> Here is the status during a restore:
>>>>> Connecting to Storage daemon Storage_bluewhale at 10.10.10.199:9103
>>>>>
>>>>> BS01-SD1 Version: 5.2.2 (26 November 2011) x86_64-unknown-linux-gnu ubuntu 11.10
>>>>> Daemon started 08-Sep-14 11:48. Jobs: run=0, running=0.
>>>>>  Heap: heap=598,016 smbytes=386,922 max_bytes=405,712 bufs=947 max_bufs=949
>>>>>  Sizes: boffset_t=8 size_t=8 int32_t=4 int64_t=8 mode=0,0
>>>>>
>>>>> Running Jobs:
>>>>> Reading: Full Restore job Restore_mail_bluewhale JobId=12897
>>>>>     Volume="mail-0386" pool="Pool_mail_bluewhale"
>>>>>     device="File_bluewhale" (/nas/bacula/bluewhale)
>>>>>     Files=0 Bytes=0 Bytes/sec=0
>>>>>     FDReadSeqNo=6 in_msg=6 out_msg=84529 fd=6
>>>>> ====
>>>>>
>>>>> Jobs waiting to reserve a drive:
>>>>> ====
>>>>>
>>>>> Terminated Jobs:
>>>>>  JobId  Level      Files      Bytes  Status   Finished         Name
>>>>> ======================================================================
>>>>>  12889  Incr          31    67.94 M  OK       08-Sep-14 00:01  Backup_os_besc-unixmgr01
>>>>>  12891  Full           4    501.0 M  OK       08-Sep-14 00:05  Backup_app_dev
>>>>>  12888  Incr         437    1.158 G  OK       08-Sep-14 00:06  Backup_os_besc-bs01
>>>>>  12890  Incr           0          0  Other    08-Sep-14 00:30  Backup_os_bluewhale
>>>>>  12893  Full           0          0  Other    08-Sep-14 01:30  Backup_mail_bluewhale
>>>>>  12884  Full   2,361,101    154.6 G  OK       08-Sep-14 04:46  Backup_os_mako
>>>>>  12892  Full           4    54.40 G  OK       08-Sep-14 05:56  Backup_app_mako
>>>>>  12894                 0          0  OK       08-Sep-14 08:53  Restore_mail_bluewhale
>>>>>  12895                 0          0  OK       08-Sep-14 09:28  Restore_mail_bluewhale
>>>>>  12896                 0          0  OK       08-Sep-14 10:10  Restore_mail_bluewhale
>>>>> ====
>>>>>
>>>>> Device status:
>>>>> Device "File_asterisk" (/nas/bacula/asterisk) is not open.
>>>>> Device "File_besc-4dvapp" (/nas/bacula/besc-4dvapp) is not open.
>>>>> Device "File_besc-bs01" (/nas/bacula/besc-bs01) is not open.
>>>>> Device "File_besc-unixmgr01" (/nas/bacula/besc-unixmgr01) is not open.
>>>>> Device "File_bluewhale" (/nas/bacula/bluewhale) is mounted with:
>>>>> Volume: mail-0386
>>>>> Pool: *unknown*
>>>>> Media type: NAS_bluewhale
>>>>> Total Bytes Read=1,121,412,096 Blocks Read=17,383
>>>>> Bytes/block=64,512
>>>>> Positioned at File=0 Block=1,121,412,275
>>>>> Device "File_demo" (/nas/bacula/demo) is not open.
>>>>> Device "File_dev" (/nas/bacula/dev) is not open.
>>>>> Device "File_mako" (/nas/bacula/mako) is not open.
>>>>> Device "File_qa" (/nas/bacula/qa) is not open.
>>>>> Device "File_qa2" (/nas/bacula/qa2) is not open.
>>>>> Device "File_smart" (/nas/bacula/smart) is not open.
>>>>> ====
>>>>>
>>>>> Used Volume status:
>>>>> mail-0386 on device "File_bluewhale" (/nas/bacula/bluewhale)
>>>>> Reader=1 writers=0 devres=0 volinuse=1
>>>>> mail-0386 read volume JobId=12897
>>>>> ====
>>>>>
>>>>> ====
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Here is my simplified client config:
>>>>>
>>>>>
>>>>> #********************************************************************************
>>>>> # bluewhale
>>>>> #********************************************************************************
>>>>> Client {
>>>>>   Name = bluewhale
>>>>>   Address = bluewhale.bnesystems.com
>>>>>   Catalog = BS01-Catalog
>>>>>   Password = "xxxxxxxxx"
>>>>>   FileRetention = 365 days
>>>>>   JobRetention = 365 days
>>>>>   AutoPrune = yes
>>>>>   MaximumConcurrentJobs = 1
>>>>> }
>>>>>
>>>>> Job {
>>>>>   Name = Restore_mail_bluewhale
>>>>>   FileSet = Full_mail_bluewhale
>>>>>   Type = Restore
>>>>>   Pool = Pool_mail_bluewhale
>>>>>   Client = bluewhale
>>>>>   Messages = Standard
>>>>> }
>>>>>
>>>>> Pool {
>>>>>   Name = Pool_mail_bluewhale
>>>>>   PoolType = Backup
>>>>>   Storage = Storage_bluewhale
>>>>>   MaximumVolumeJobs = 1
>>>>>   CatalogFiles = yes
>>>>>   AutoPrune = yes
>>>>>   VolumeRetention = 365 days
>>>>>   Recycle = yes
>>>>>   LabelFormat = "mail-"
>>>>> }
>>>>>
>>>>> Storage {
>>>>>   Name = Storage_bluewhale
>>>>>   Address = 10.10.10.199
>>>>>   SDPort = 9103
>>>>>   Password = "xxxxxxx"
>>>>>   Device = File_bluewhale
>>>>>   MediaType = NAS_bluewhale
>>>>>   MaximumConcurrentJobs = 1
>>>>> }
>>>>>
>>>>> FileSet {
>>>>>   Name = Full_mail_bluewhale
>>>>>   Include {
>>>>>     Options {
>>>>>       signature = SHA1
>>>>>       # readfifo = yes
>>>>>     }
>>>>>     File = "/mail.tar"
>>>>>   }
>>>>> }
>>>>>
>>>>>
>>>>>
>>>>> Thanks for the help. I appreciate all the input.
>>>>>
>>>>> --Kenny
>>>>>
>>>>>
>>>>> On Sun, Sep 7, 2014 at 8:22 AM, Dan Langille <d...@langille.org> wrote:
>>>>>
>>>>>> I suggest removing the before & after scripts.
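>>>>>>
>>>>>> E.g., comment out any script hooks in the Job definitions, along these
>>>>>> lines (a sketch; directive names as in Bacula 5.x, script paths
>>>>>> hypothetical):
>>>>>>
>>>>>>     Job {
>>>>>>       ...
>>>>>>       # ClientRunBeforeJob = "/usr/local/bin/MailBackup.bash"
>>>>>>       # ClientRunAfterJob = "/usr/local/bin/MailCleanup.bash"
>>>>>>     }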
>>>>>>
>>>>>> --
>>>>>> Dan Langille
>>>>>> http://langille.org/
>>>>>>
>>>>>>
>>>>>> > On Sep 6, 2014, at 8:38 PM, Kenny Noe <knoe...@gmail.com> wrote:
>>>>>> >
>>>>>> > Dan,
>>>>>> >
>>>>>> > Appreciate the reply.... Yes, this is exactly what I want to do.
>>>>>> > However, when I try to do a "simple" restore, the job finishes
>>>>>> > with the error previously given.
>>>>>> >
>>>>>> > Any suggestions on how to do this would be appreciated.
>>>>>> >
>>>>>> > Thanks --Kenny
>>>>>> >
>>>>>> >> On Sat, Sep 6, 2014 at 5:51 PM, Dan Langille <d...@langille.org>
>>>>>> wrote:
>>>>>> >>
>>>>>> >> On Sep 5, 2014, at 5:48 PM, Kenny Noe <knoe...@gmail.com> wrote:
>>>>>> >>
>>>>>> >> Birre,
>>>>>> >>
>>>>>> >> Thanks for the reply. I guess this is where I get lost...
>>>>>> >>
>>>>>> >>
>>>>>> >>
>>>>>> >> The fifo is reading a file, created in the pre-process, called
>>>>>> >> mail.tar. The mail.tar is made from the following directories:
>>>>>> >> /opt/zimbra and /var/mail/zimbra. This is where the Zimbra files
>>>>>> >> and mailstore were kept.
>>>>>> >>
>>>>>> >> This pre-process is a script containing the following:
>>>>>> >>
>>>>>> >> MailBackup.bash
>>>>>> >> #!/bin/bash
>>>>>> >>
>>>>>> >> # Silence all normal output from this script.
>>>>>> >> exec >/dev/null
>>>>>> >>
>>>>>> >> MKDIR="/bin/mkdir"
>>>>>> >> MKFIFO="/usr/bin/mkfifo"
>>>>>> >> RM="/bin/rm"
>>>>>> >> TAR="/bin/tar"
>>>>>> >>
>>>>>> >> DEFCODE=0
>>>>>> >> DUMPBASE="/data/backups"
>>>>>> >>
>>>>>> >> errCode=${DEFCODE}
>>>>>> >> mailDir="/var/mail/zimbra"
>>>>>> >> zimbraDir="/opt/zimbra"
>>>>>> >>
>>>>>> >> Main()
>>>>>> >> {
>>>>>> >>     # Stop Zimbra while the mailstore is being replaced.
>>>>>> >>     service zimbra stop
>>>>>> >>
>>>>>> >>     RunMailRestore
>>>>>> >>
>>>>>> >>     service zimbra start
>>>>>> >>
>>>>>> >>     ExitScript ${errCode}
>>>>>> >> }
>>>>>> >>
>>>>>> >> RunMailRestore()
>>>>>> >> {
>>>>>> >>     EXTENSION=".tar"
>>>>>> >>
>>>>>> >>     dumpDir="${DUMPBASE}/mail"
>>>>>> >>     fifoDir="${dumpDir}/fifo"
>>>>>> >>
>>>>>> >>     RebuildFifoDir
>>>>>> >>
>>>>>> >>     # Create the fifo Bacula writes into, then start a background tar
>>>>>> >>     # that extracts whatever arrives on it.
>>>>>> >>     ${MKFIFO} ${fifoDir}/mail${EXTENSION}
>>>>>> >>
>>>>>> >>     ${TAR} -xpf ${fifoDir}/mail${EXTENSION} 2>&1 </dev/null &
>>>>>> >> }
>>>>>> >>
>>>>>> >> RebuildFifoDir()
>>>>>> >> {
>>>>>> >>     # Recreate the fifo directory from scratch.
>>>>>> >>     if [ -d ${fifoDir} ]
>>>>>> >>     then
>>>>>> >>         ${RM} -rf ${fifoDir}
>>>>>> >>     fi
>>>>>> >>
>>>>>> >>     ${MKDIR} -p ${fifoDir}
>>>>>> >> }
>>>>>> >>
>>>>>> >> ExitScript()
>>>>>> >> {
>>>>>> >>     exit ${1}
>>>>>> >> }
>>>>>> >>
>>>>>> >> Main
>>>>>> >>
>>>>>> >> The restore script simply does a tar xpf instead of a tar cpf.
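>>>>>> >>
>>>>>> >> I.e., the backup version's tar line is roughly this (a sketch built
>>>>>> >> from the same variables; the exact line may differ):
>>>>>> >>
>>>>>> >>     ${TAR} -cpf ${fifoDir}/mail${EXTENSION} ${zimbraDir} ${mailDir} 2>&1 </dev/null &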
>>>>>> >>
>>>>>> >>
>>>>>> >> Perhaps instead of doing that, just restore the data, and then do
>>>>>> >> the tar xpf later.
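>>>>>> >>
>>>>>> >> Something like this (a sketch; the restore path is just an example,
>>>>>> >> and it assumes mail.tar comes back as a regular file):
>>>>>> >>
>>>>>> >>     # in bconsole: restore to plain disk, no fifo involved
>>>>>> >>     restore client=bluewhale where=/tmp/bacula-restore
>>>>>> >>
>>>>>> >>     # then extract manually on the target machine
>>>>>> >>     tar -xpf /tmp/bacula-restore/data/backups/mail/fifo/mail.tar -C /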
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>>
>
>
--
============================================
Heitor Medrado de Faria | Need Bacula training? 10% discount coupon code at
Udemy: bacula-users
<https://www.udemy.com/bacula-backup-software/?couponCode=bacula-users>
+55 61 2021-8260
+55 61 8268-4220
Site: www.bacula.com.br
Facebook: heitor.faria <http://www.facebook.com/heitor.faria>
Gtalk: heitorfa...@gmail.com
============================================
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users