Re: [Bacula-users] run job problem

2009-12-29 Thread Łukasz PUZON Brodowski
On 2009-12-29 08:50, FredNF wrote:
 Lukasz PUZON Brodowski wrote:
 Hi all.
 I installed Bacula 3.0.3 on a FreeBSD 7.2 system, with MySQL (SQLite
 before). In both cases, when I run my only job, bacula-dir crashes. I
 ran it with the -d100 option, and the last error is:
 ABORTING due to ERROR in lockmgr.c:65 Mutex lock failure. ERR=Resource
 deadlock avoided

I found the solution: Bacula crashes when it cannot write its log file. I 
found this by running in -d999 mode.
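
For reference, the director can be run in the foreground at a high debug level roughly like this (config path as on a typical FreeBSD ports install; adjust to your layout):

# bacula-dir -f -d999 -c /usr/local/etc/bacula-dir.conf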


 Hi bacula users :)

 As I was experiencing the same problem, I may have the answer :)
 I'm currently using Bacula on a FreeBSD 8.0 server, installed from ports.

 Check the charset on the database. Mine was UTF-8; I didn't check when
 I created the database. After dumping the DB, dropping it and recreating it
 with the correct encoding (SQL_ASCII), no more trouble.

 A PostgreSQL-centric little howto:

 # pg_dump -Ubacula bacula > ~/bacula.sql
 # su - pgsql
 $ psql postgres
 postgres=# DROP DATABASE bacula;
 postgres=# CREATE DATABASE bacula WITH ENCODING 'SQL_ASCII';
 postgres=# \q
 $ exit
 # vi ~/bacula.sql (and change the line
 SET client_encoding = 'UTF8';
 to
 SET client_encoding = 'SQL_ASCII';)
 # cat ~/bacula.sql | psql -Ubacula

 And, now, bacula should not crash anymore.
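
 As a quick check (not part of the howto above), the new encoding can be
 verified before reloading the dump; psql lists each database with its
 encoding:

 $ psql -l
 (the Encoding column for the bacula database should now read SQL_ASCII)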

 Hope it helps.

 Regards,

 Fred.




[Bacula-users] Restore daily ?

2009-12-29 Thread Jean-François Leroux
Hi, I'm using bacula-1.38.11.8 on Debian Etch. I back up several
servers with Bacula; the machines are added at the end of the general
bacula-dir.conf with the '@' sign, e.g. '@machine1.conf'.

Now I would like to create a job that restores some files daily from
one machine to another.
I already have a general restore job in my bacula-dir.conf. So the
question is: do I add a restore job in each of these 'machine' files,
so that I can restore files from these machines to different places
and at different times?
And how do I attach this restore job to the job currently being run? The
Schedule resource doesn't say which job is to be run.
For example in my machine1.conf, I have

Job {
  Name = Backup-Machine1
  Type = Backup
  Level = Full
  Client = machine1-fd
  Fileset = Machine1-Fileset
  Messages = Standard
  Storage = Machine1-Storage
  Pool = Machine1-Pool
  Full Backup Pool = Machine1-Full-Pool
  Differential Backup  Pool = Machine1-Diff-Pool
  Incremental Backup Pool = Machine1-Inc-Pool
  Schedule=Machine1Cycle
  Write Bootstrap = /var/lib/bacula/Machine1.bsr
}

Schedule {
  Name = Machine1Cycle
  Run = Level=Full monthly 1st sun at 4:45
  Run = Level=Differential weekly 2nd-5th sun at 4:45
  Run = Level=Incremental mon-sat at 4:45
}

FileSet {
  Name = Machine1-Fileset
  Include {
Options{
  Compression=GZIP
  signature=SHA1
  wildfile = *.run
  Exclude = yes
}
File = /home
File = /root
File = /etc
File = /var
  }
}

I don't know if this is clear. To sum it up: I want to restore files
from the backups on a daily basis. How do I do that (not manually) in
my machine1.conf?

Thanks for your help :)



Re: [Bacula-users] Ghost Job

2009-12-29 Thread Bruno Friedmann
On 12/28/2009 02:57 PM, Moabe Ferreira Domingos wrote:
 Hello,
 
 I'm having the following problem: when I check the status of the director, it 
 shows the job as if it were running, but there is no transfer of files. Checking 
 the status of the client in question, I see that there are actually two jobs 
 running, but the first job appears only on the client; on the director, that 
 job has been completed.
 
 Below is the output from status dir in bconsole; as shown, there is no job 
 running at the moment.
 
 status dir
 
 Scheduled Jobs:
 Level Type Pri Scheduled Name Volume
 ===
 Full Backup 5 28-Dez-09 19:00 JBL-matriz-File01 JBL-V07
 
 
 Running Jobs:
 No Jobs running.
 
 No Terminated Jobs.
 
 *
 
 After running any job, the scenario is:
 
 *status dir
 Scheduled Jobs:
 Level Type Pri Scheduled Name Volume
 ===
 Full Backup 5 28-Dez-09 19:00 JBL-matriz-File01 JBL-V07
 
 
 Running Jobs:
 JobId Level Name Status
 ==
 1217 Differe JBL-matriz-File02.2009-12-28_10.33.56.05 is running
 
 No Terminated Jobs.
 
 *
 
 *status client=JBL
 
 Running Jobs:
 JobId 1046 Job JBL-matriz-File02.2009-12-25_20.00.00.41 is running.
 Backup Job started: 26-Dez-09 06:48
 Files=0 Bytes=0 Bytes/sec=0 Errors=0
 Files Examined=0
 SDReadSeqNo=5 fd=5
 JobId 1217 Job JBL-matriz-File02.2009-12-28_10.33.56.05 is running.
 Backup Job started: 28-Dez-09 10:34
 Files=0 Bytes=0 Bytes/sec=0 Errors=0
 Files Examined=0
 SDReadSeqNo=5 fd=7
 Director connected at: 28-Dez-09 10:34
 
 
 As seen above, there are two jobs running on the client, but only the second, 
 JobId 1217, is actually running on the director. The first, JobId 1046, has 
 been completed on the director, which leaves me no choice but to restart 
 bacula-dir.
 
 Has anyone experienced this problem? Any idea how to solve it?
 

For information: I'm tracking a potential bug. Could you check whether your
message log (bsmtp sending mail) is working? That is, when the job is finished
you should also receive the message in bconsole (the 'm' command).


If not, try to correct it and send the result here on the mailing list.
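
For reference, such a check from bconsole might look like this (a minimal
session; the exact output depends on your Messages configuration):

# bconsole
*messages
(pending job messages are printed here; if nothing ever shows up after
 jobs finish, the Messages resource or bsmtp setup is likely broken)
*quit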


-- 

 Bruno Friedmann




Re: [Bacula-users] Restore daily ?

2009-12-29 Thread Avi Rozen
Jean-François Leroux wrote:

 [...]

For a while I had such a setup, where I used RunAfterJob to run a script
containing something like this:

bconsole <<EOF
restore client=machine-cycle-fd restoreclient=machine-cycle-fd
fileset=machine-cycle-fileset select current all done
5
yes
quit
EOF

The '5' selects the right restore job definitions on my setup, 'yes'
confirms that the job parameters are correct and 'quit' exits bconsole.

The restore job looked like this (note the 'ifnewer'):

Job {
  Name = snapshot-restore-job
  Type = Restore
  Storage = File
  Messages = Standard
  RunBeforeJob = /etc/bacula/scripts/run-before-job.sh %n
  RunAfterJob = /etc/bacula/scripts/run-after-job.sh %n
  Client = machine-cycle-fd
  FileSet = machine-cycle-fileset
  Pool = machine-cycle-pool
  Full Backup Pool = machine-cycle-full-pool
  Where = /mnt/gigapod/data/snapshot
  Replace = ifnewer
}

Mind you, this is bacula 3.0 and I used an admin job for the task, but
something along these lines would probably work with 1.38 from within a
regular backup job. I expect that the restore command would be somewhat
trickier to run too (you may need to emulate interaction in order to
modify the restore client, etc.)
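
To tie this back to the question, such a script would be attached to the
existing daily backup job, for example (script path hypothetical):

Job {
  Name = Backup-Machine1
  ...
  RunAfterJob = "/etc/bacula/scripts/daily-restore.sh"
}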

Hope this helps,
Avi.






Re: [Bacula-users] Restore daily ?

2009-12-29 Thread Jean-François Leroux
Thanks a lot Avi. I'll try that and tell you how it goes :)

Cheers.
Jean-François Leroux




Re: [Bacula-users] 3924 Device not in SD Device resources Error on Windows backups only

2009-12-29 Thread M. Sébastien LELIEVRE




M. Sébastien LELIEVRE wrote:

  
  Greetings everyone,

 I keep receiving the error I paste below. This error only occurs on
 Windows client saves; Linux clients are saved just fine!

 I do not understand line 4 below, since it points to a tse01 volume
 (tse01 is another Windows server); the configuration seems OK, though.

 What am I missing?

 Note: this behavior occurs on bacula3 (compiled by myself) and bacula2
 (Lenny debs), and for all Windows machines I try to save. All Linux
 saves go just fine.
  
 Below are:
 1- the log file of a Windows backup
 2- the bacula-sd conf (password blanked for the paste)
 3- the Windows backup conf (password blanked for the paste)
# cat client-trt01.log
28-déc 17:43 abfsav01 JobId 10637: No prior Full backup Job record
found.
28-déc 17:43 abfsav01 JobId 10637: No prior or suitable Full backup
found in catalog. Doing FULL backup.
28-déc 17:43 abfsav01 JobId 10637: Start Backup JobId 10637,
Job=trt01.2009-12-28_17.43.45
28-déc 17:43 abfstor1-sd JobId 10637: Failed command: 1998 Volume
"tse01-diff05" status is Append, not in Pool.
  
28-déc 17:43 abfstor1-sd JobId 10637: Fatal error:
 Device "trt01-dv" with MediaType "file" requested by DIR not found
in SD Device resources.
28-déc 17:43 abfsav01 JobId 10637: Fatal error:
 Storage daemon didn't accept Device "trt01-dv" because:
 3924 Device "trt01-dv" not in SD Device resources.
28-déc 17:43 abfsav01 JobId 10637: Error: Bacula abfsav01 2.2.8
(26Jan08): 28-déc-2009 17:43:48
  Build OS:   i486-pc-linux-gnu debian lenny/sid
  JobId:  10637
  Job:    trt01.2009-12-28_17.43.45
  Backup Level:   Full (upgraded from Incremental)
  Client: "trt01"
  FileSet:    "trt01-fs" 2009-05-19 23:00:00
  Pool:   "trt01-full" (From Job resource)
  Storage:    "trt01-sd" (From Pool resource)
  Scheduled time: 28-déc-2009 17:43:44
  Start time: 28-déc-2009 17:43:48
  End time:   28-déc-2009 17:43:48
  Elapsed time:   0 secs
  Priority:   10
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0 (0 B)
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Software Compression:   None
  VSS:    no
  Storage Encryption: no
  Volume name(s):
  Volume Session Id:  23
  Volume Session Time:    1261666154
  Last Volume Bytes:  0 (0 B)
  Non-fatal FD errors:    0
  SD Errors:  0
  FD termination status:
  SD termination status:
  Termination:    *** Backup Error ***
  
~# cat /etc/bacula/bacula-sd.conf
Storage {
    Name = stor1-sd
    WorkingDirectory = "/var/lib/bacula"
    Pid Directory = "/var/run/bacula"
    Maximum Concurrent Jobs = 20
    SDAddresses = {
    ip = { addr = 192.168.254.13; port = 9103; }
    }
}
  
Director {
    Name = abfsav01
    Password = ""
}
  
Messages {
    Name = Standard
    Director = abfsav01 = all
}
  
Device {
    Name = trt01-dv
    Media Type = File
    Archive Device = /media/stor01/windows/trt01
    Random Access = Yes;
    AutomaticMount = Yes;
    Removable Media = No;
    AlwaysOpen = No;
}
  
Device {
    Name = tse01-dv
    Media Type = File
    Archive Device = /media/stor01/windows/tse01
    Random Access = Yes;
    AutomaticMount = Yes;
    Removable Media = No;
    AlwaysOpen = No;
}
  
  Device {
    Name = catalog-dv
    Media Type = File
    Archive Device = /media/stor01/linux/catalog
    Random Access = Yes;
    AutomaticMount = Yes;
    Removable Media = No;
    AlwaysOpen = No;
}
  
  Device {
    Name = oracle-dv
    Media Type = File
    Archive Device = /media/stor01/linux/oracle
    Random Access = Yes;
    AutomaticMount = Yes;
    Removable Media = No;
    AlwaysOpen = No;
}
  
:~# cat /etc/bacula/include/windows/trt01-dir.include
Client {
    Name = "trt01"
    Address = 192.168.128.11
    Catalog = ABFSAVCatalog01
    Password = ""
    Heartbeat Interval = 1 minutes
    AutoPrune = No
}
  
FileSet {
    Name =trt01-fs
    Include {
    File = c:/client
    File = d:/ERP
    File = c:/WINDOWS/erp.ini
    File = d:/backupAD/trt01.bkf
    }
}
  
Pool {
    Name = "trt01-full"
    Pool Type = backup
    Recycle Oldest Volume = yes
    Volume retention = 1 day
    Maximum Volume Jobs = 1
    Next Pool = tapes
    Storage = trt01-sd
}
  
Pool {
    Name = "trt01-diff"
    Maximum Volume Jobs = 1
    Volume retention = 1 day
    Recycle Oldest Volume = yes
    Pool Type = Backup
    Next Pool = tapes
    Storage = trt01-sd
}
  
Storage {
    Name = trt01-sd
    Address = 192.168.254.13
    Device = trt01-dv
    Media Type = file
  

Re: [Bacula-users] 3924 Device not in SD Device resources Error on Windows backups only

2009-12-29 Thread John Drescher
Did you restart the sd after adding tse01-dv?

Do your Linux machines use tse01-dv?
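
A restart (not just a reload) is what makes the SD re-read its Device
resources; on a Lenny-style install that would be something like (init script
name may vary by package):

# /etc/init.d/bacula-sd restart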

John



Re: [Bacula-users] Cannot restore VMmware/ZFS

2009-12-29 Thread Paul Greidanus

On 2009-12-28, at 6:56 PM, Marc Schiffbauer wrote:

 * Paul Greidanus wrote on 28.12.09 at 23:44:
 I'm trying to restore files I have backed up on the NFS server that I'm 
 using to back VMware, but I get errors similar to this every time I 
 try to restore:
 
 28-Dec 12:10 krikkit-dir JobId 1433: Start Restore Job 
 Restore.2009-12-28_12.10.28_54
 28-Dec 12:10 krikkit-dir JobId 1433: Using Device TL2000-1
 28-Dec 12:10 krikkit-sd JobId 1433: 3307 Issuing autochanger unload slot 
 11, drive 0 command.
 28-Dec 12:11 krikkit-sd JobId 1433: 3304 Issuing autochanger load slot 3, 
 drive 0 command.
 28-Dec 12:12 krikkit-sd JobId 1433: 3305 Autochanger load slot 3, drive 0, 
 status is OK.
 28-Dec 12:12 krikkit-sd JobId 1433: Ready to read from volume 09L4 on 
 device TL2000-1 (/dev/rmt/0n).
 28-Dec 12:12 krikkit-sd JobId 1433: Forward spacing Volume 09L4 to 
 file:block 473:0.
 28-Dec 12:15 krikkit-sd JobId 1433: Error: block.c:1010 Read error on fd=4 
 at file:blk 475:0 on device TL2000-1 (/dev/rmt/0n). ERR=I/O error.
 28-Dec 12:15 krikkit-sd JobId 1433: End of Volume at file 475 on device 
 TL2000-1 (/dev/rmt/0n), Volume 09L4
 28-Dec 12:15 krikkit-sd JobId 1433: End of all volumes.
 28-Dec 12:15 krikkit-fd JobId 1433: Error: attribs.c:423 File size of 
 restored file 
 /backupspool/rpool/vm2/.zfs/snapshot/backup/InvidiCA/InvidiCA-flat.vmdk not 
 correct. Original 8589934592, restored 445841408.
 
 Files are backed up from a zfs snapshot which is created just before the 
 backup starts. Every other file I am attempting to restore works just 
 fine... 
 
 Is anyone out there doing ZFS snapshots for VMware, or backing up NFS 
 servers that have .vmdk files on it?
 
 No, but I could imagine that this might have something to do with
 some sparse-file setting.
 
 Have you checked how much space of your 8 GB flat vmdk is actually being
 used? Maybe this was 445841408 bytes at backup time?
 
 Does the same happen if you do not use pre-allocated vmdk disks?
 (Which is better anyway most of the time if you use NFS instead of VMFS)
 

All I use is preallocated disks, especially on NFS. I don't think I can 
actually use sparse disks on NFS.

As a test, I created a 100Gb file from /dev/zero, and tried backing that up and 
restoring it, and I get this:

29-Dec 13:45 krikkit-sd JobId 1446: Error: block.c:1010 Read error on fd=4 at 
file:blk 13:0 on device TL2000-1 (/dev/rmt/0n). ERR=I/O error.
29-Dec 13:45 krikkit-sd JobId 1446: End of Volume at file 13 on device 
TL2000-1 (/dev/rmt/0n), Volume 10L4
29-Dec 13:45 krikkit-sd JobId 1446: End of all volumes.
29-Dec 13:46 filer2-fd JobId 1446: Error: attribs.c:423 File size of restored 
file /scratch/rpool/vm2/.zfs/snapshot/backup/100GbTest not correct. Original 
66365161472, restored 376340827.

So this tells me that whatever is going on, it's not VMware that's causing the 
trouble. I'm wondering if I'm running into problems with ZFS snapshot 
backups, or just something with large files and Bacula?
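
[For reference, a non-sparse all-zero test file of this sort can be created
with dd; the path and size below are assumed, not necessarily the exact
command used:

# dd if=/dev/zero of=/rpool/vm2/100GbTest bs=1M count=102400

dd writes real zero blocks rather than seeking, so the file is not sparse;
with compression off, ZFS stores it in full.]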

Paul


Re: [Bacula-users] Cannot restore VMmware/ZFS

2009-12-29 Thread Fahrer, Julian
Which Solaris are you using?
Is ZFS compression/dedup enabled?
Maybe I could run some tests for you. I've had no problems with ZFS so far.


-----Original Message-----
From: Paul Greidanus
Subject: Re: [Bacula-users] Cannot restore VMmware/ZFS

 [...]



Re: [Bacula-users] Cannot restore VMmware/ZFS

2009-12-29 Thread Paul Greidanus
Solaris is OpenSolaris 2009.06, and I don't think I have compression or dedup 
enabled.

Can you try backing up and restoring a 100Gb file full of zeros from a snapshot?

Paul

On 2009-12-29, at 2:53 PM, Fahrer, Julian wrote:

 [...]




Re: [Bacula-users] Cannot restore VMmware/ZFS

2009-12-29 Thread Fahrer, Julian
Hey Paul,

I don't have enough space on the test system right now. I just created a new 
ZFS filesystem without compression/dedup and a 1 GB file on a Solaris 10u6 
system. I could back up and restore from a snapshot without errors.

Could you post your ZFS config?
zfs get all <zfs-name>
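
A sketch of such a snapshot round-trip test (dataset name and paths assumed):

$ zfs snapshot tank/test@backup
(back up /tank/test/.zfs/snapshot/backup with Bacula, restore it
 to another location, then compare the restored file to the original)
$ zfs destroy tank/test@backup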

Julian

-----Original Message-----
From: Paul Greidanus
Subject: Re: AW: [Bacula-users] Cannot restore VMmware/ZFS

 [...]



Re: [Bacula-users] Virtual Tape Emulation

2009-12-29 Thread Arno Lehmann
Hi,

29.12.2009 11:02, Yuri Timofeev wrote:
 Hi
 
 Can you send me your bacula-*.conf files and scripts for configuring a
 virtual autochanger?

Sure... also to the list, as I guess it might be interesting to 
others, too...

This is one of several disk-based virtual autochangers:

bac...@gnom:~ cat /baculadata1/filestorage01/disk-changer.conf
maxslot=700
maxdrive=4

From the SD conf:
Autochanger {
   Name = File01
   Device = File01a
   Device = File01b
   Device = File01c
   Device = File01d
   Device = File01e
   Changer Command = /opt/bacula/etc/disk-changer %c %o %S %a %d
   Changer Device = /baculadata1/filestorage01/disk-changer.conf
}

Device {
   Name = File01a # a to e
   Media Type = File01
   Device Type = File
   Random Access = Yes
   Archive Device = /baculadata1/filestorage01/drive0 # 0 to 4
   Drive Index = 0 # 0 to 4
   Label Media = Yes
   Autoselect = Yes # No for File01e / Drive 4
   Maximum Spool Size = 10G # spool sizes as needed.
   Maximum Job Spool Size = 10G #  Sounds nonsense for disk,
 # but I'm using slow external ones here...
   Spool Directory = /baculaspool
   Autochanger = Yes
}

From the DIR conf:
Storage {
   Name = FileCh
   Address = gnom
   SDPort = 9103
   Password = sure
   Device = File01
   Media Type = File01
   Autochanger = Yes
   Maximum Concurrent Jobs = 8
}

Pool {
   Name = Whatever
   Pool Type = Backup
   Recycle = Yes
   Auto Prune = Yes
   Volume Retention = 3 months # as needed
   Storage = FileCh
   Maximum Volume Jobs = 1 # rather important IMO
   Maximum Volume Bytes = 3G # no. of volumes * this size
# determined experimentally... 2.1 TB is *more* than the disks
# capacity, but on average, I've got one volume half full per job
   Volume Use Duration = 6h
   Catalog Files = Yes
   Scratch Pool = DiskScr # This is a device-specific scratch pool
   Recycle Pool = DiskScr
   Next Pool = DLTLong # migration to tape
}
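
Volumes in such a changer are handled like tape volumes; creating one from
bconsole might look like this (names as in the resources above, slot number
arbitrary):

*label storage=FileCh pool=Whatever slot=1 drive=0
*update slots storage=FileCh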

Status storage output snippets:
Autochanger File01 with devices:
File01a (/baculadata1/filestorage01/drive0)
File01b (/baculadata1/filestorage01/drive1)
File01c (/baculadata1/filestorage01/drive2)
File01d (/baculadata1/filestorage01/drive3)
File01e (/baculadata1/filestorage01/drive4)
...
Device File01a (/baculadata1/filestorage01/drive0) is not open.
 Slot 591 is loaded in drive 0.
Device File01b (/baculadata1/filestorage01/drive1) is not open.
 Slot 523 is loaded in drive 1.
Device File01c (/baculadata1/filestorage01/drive2) is not open.
 Slot 78 is loaded in drive 2.
Device File01d (/baculadata1/filestorage01/drive3) is not open.
 Slot 474 is loaded in drive 3.
Device File01e (/baculadata1/filestorage01/drive4) is not open.
 Drive 4 status unknown.


And I excluded disk volumes from query 15...

Does that help?

Arno

 2009/12/18 Arno Lehmann a...@its-lehmann.de:
 Hello,

 17.12.2009 14:41, John Drescher wrote:
 is there any sample config for using a virtual tape library storing
 files on disk out there?

 There is vchanger that I use at home.

 http://sourceforge.net/projects/vchanger/
 With the hard disks constantly available, I'm having a stable setup
 using the project-provided disk-changer script.

 Essentially, you define a virtual autochanger with a configured number
 of slots and drives. You should limit the volume size Bacula generates
 for those devices to ensure you don't run out of disk space.

 Current SD device status:

 Autochanger File01 with devices:
File01a (/baculadata1/filestorage01/drive0)
File01b (/baculadata1/filestorage01/drive1)
File01c (/baculadata1/filestorage01/drive2)
File01d (/baculadata1/filestorage01/drive3)
File01e (/baculadata1/filestorage01/drive4)
 Autochanger File03 with devices:
File03a (/baculadata2/filestorage03/drive0)
File03b (/baculadata2/filestorage03/drive1)
File03c (/baculadata2/filestorage03/drive2)
File03d (/baculadata2/filestorage03/drive3)
File03e (/baculadata2/filestorage03/drive4)

 and

 Device File01a (/baculadata1/filestorage01/drive0) is not open.
 Slot 543 is loaded in drive 0.
 Device File01b (/baculadata1/filestorage01/drive1) is not open.
 Slot 570 is loaded in drive 1.
 Device File01c (/baculadata1/filestorage01/drive2) is not open.
 Slot 471 is loaded in drive 2.
 Device File01d (/baculadata1/filestorage01/drive3) is not open.
 Slot 474 is loaded in drive 3.
 Device File01e (/baculadata1/filestorage01/drive4) is not open.
 Drive 4 status unknown.
 Device File03a (/baculadata2/filestorage03/drive0) is not open.
 Slot 363 is loaded in drive 0.
 Device File03b (/baculadata2/filestorage03/drive1) is not open.
 Slot 290 is loaded in drive 1.
 Device File03c (/baculadata2/filestorage03/drive2) is not open.
 Drive 2 status unknown.
 Device File03d (/baculadata2/filestorage03/drive3) is not open.
 Drive 3 status unknown.
 Device File03e (/baculadata2/filestorage03/drive4) is not open.
 Drive 4 status unknown.

 Lots of slots to manage, which makes using 

Re: [Bacula-users] Cannot restore VMmware/ZFS

2009-12-29 Thread Paul Greidanus
Hi Julian,

Here's the info for that filesystem. I also just tried my 100Gb test, which 
fails both on the filesystem itself and on the snapshot. I don't have problems 
with 1 GB files, either...

NAME       PROPERTY                        VALUE                  SOURCE
rpool/vm2  type                            filesystem             -
rpool/vm2  creation                        Fri Nov  6 14:47 2009  -
rpool/vm2  used                            116G                   -
rpool/vm2  available                       751G                   -
rpool/vm2  referenced                      116G                   -
rpool/vm2  compressratio                   1.00x                  -
rpool/vm2  mounted                         yes                    -
rpool/vm2  quota                           none                   default
rpool/vm2  reservation                     none                   default
rpool/vm2  recordsize                      128K                   default
rpool/vm2  mountpoint                      /rpool/vm2             default
rpool/vm2  sharenfs                        rw,root=vmsrv2,anon=0  local
rpool/vm2  checksum                        on                     default
rpool/vm2  compression                     off                    default
rpool/vm2  atime                           on                     default
rpool/vm2  devices                         on                     default
rpool/vm2  exec                            on                     default
rpool/vm2  setuid                          on                     default
rpool/vm2  readonly                        off                    default
rpool/vm2  zoned                           off                    default
rpool/vm2  snapdir                         hidden                 default
rpool/vm2  aclmode                         groupmask              default
rpool/vm2  aclinherit                      restricted             default
rpool/vm2  canmount                        on                     default
rpool/vm2  shareiscsi                      off                    default
rpool/vm2  xattr                           on                     default
rpool/vm2  copies                          1                      default
rpool/vm2  version                         3                      -
rpool/vm2  utf8only                        off                    -
rpool/vm2  normalization                   none                   -
rpool/vm2  casesensitivity                 sensitive              -
rpool/vm2  vscan                           off                    default
rpool/vm2  nbmand                          off                    default
rpool/vm2  sharesmb                        off                    default
rpool/vm2  refquota                        none                   default
rpool/vm2  refreservation                  none                   default
rpool/vm2  primarycache                    all                    default
rpool/vm2  secondarycache                  all                    default
rpool/vm2  usedbysnapshots                 14.9M                  -
rpool/vm2  usedbydataset                   116G                   -
rpool/vm2  usedbychildren                  0                      -
rpool/vm2  usedbyrefreservation            0                      -
rpool/vm2  org.opensolaris.caiman:install  ready                  inherited from rpool

On 2009-12-29, at 4:11 PM, Fahrer, Julian wrote:

 [...]

[Bacula-users] Backup differential Bacula

2009-12-29 Thread togum

Howdy friends,

I'm using Bacula in our company's backup system, but I always have a problem 
when our tape is full. I'm using differential Bacula backups: when the tape is 
full and I change to a new tape, it does a full backup again, and this keeps 
happening.

Please help me!
Thanks in advance

+--
|This was sent by to...@softbless.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--


