[Bacula-users] Network error with FD during Backup: ERR=Connection reset by peer

2012-09-26 Thread Michael Neuendorf
Hello there,

I have a problem while backing up two Windows servers in two different 
installations. The scenarios are nearly identical:

- Bacula-dir (v5.0.1) on Ubuntu 10.04.3 virtualized with VMware vSphere 5 
Hypervisor
- Bacula-sd (v5.0.1) on same server with file storage on a NAS, mounted via 
iSCSI.
- Windows server with the problem in installation 1: Windows 2003 32bit SP1, 
virtualized on a different VMware vSphere 5 Hypervisor, Bacula-fd 5.2.3
- Windows server with the problem in installation 2: Windows 2008R2 SP1 64bit, 
virtualized on the same hypervisor, Bacula-fd 5.2.6
- Many other servers (virtualized and physical), Windows or Linux, without any 
problems.

In both installations, the errors shown below occur two to three times a week 
(not in every backup). I have three jobs per server, and it is not always the 
same job, but it is always the same server.

What I have tried:
- Upgraded to a newer FD version
- Limited each server to one concurrent job (Maximum Concurrent Jobs = 1 on all FDs)
- Set Heartbeat Interval = 60 on FD, SD and Dir
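
For reference, the relevant part of the FD configuration looks roughly like this (a sketch only; the resource name is illustrative, not my literal config):

```conf
# bacula-fd.conf on the affected client (name is an example)
FileDaemon {
  Name = nina-fd
  Maximum Concurrent Jobs = 1   # one job at a time on this client
  Heartbeat Interval = 60       # keep idle DIR/SD connections alive
}
```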

I hope someone has a clue for me, or a hint on where to troubleshoot further.

Best regards

Michael Neuendorf


The logs of the two servers:
2012-09-19 22:58:45   bacula-dir JobId 13962: Start Backup JobId 13962, 
Job=nina_systemstate.2012-09-19_21.50.01_31
2012-09-19 22:58:46   bacula-dir JobId 13962: Using Device FileStorageLocal
2012-09-19 23:02:41   nina-fd JobId 13962: DIR and FD clocks differ by 233 
seconds, FD automatically compensating.
2012-09-19 23:02:41   nina-fd JobId 13962: DIR and FD clocks differ by 233 
seconds, FD automatically compensating.
2012-09-19 23:02:45   nina-fd JobId 13962: shell command: run 
ClientRunBeforeJob C:/backup/bacula/systemstate.cmd
2012-09-19 23:02:45   nina-fd JobId 13962: shell command: run 
ClientRunBeforeJob C:/backup/bacula/systemstate.cmd
2012-09-19 23:03:40   bacula-dir JobId 13962: Sending Accurate information.
2012-09-19 23:05:12   bacula-dir-sd JobId 13962: Job write elapsed time = 
00:01:21, Transfer rate = 2.517 M Bytes/second
2012-09-19 23:09:06   nina-fd JobId 13962: shell command: run ClientAfterJob 
C:/backup/bacula/systemstate.cmd cleanup
2012-09-19 23:09:06   nina-fd JobId 13962: shell command: run ClientAfterJob 
C:/backup/bacula/systemstate.cmd cleanup
2012-09-19 23:05:17   bacula-dir JobId 13962: Fatal error: Network error with 
FD during Backup: ERR=Connection reset by peer 

2012-09-19 23:05:18   bacula-dir-sd JobId 13962: JobId=13962 
Job=nina_systemstate.2012-09-19_21.50.01_31 marked to be canceled.
2012-09-19 23:05:19   bacula-dir JobId 13962: Error: Bacula bacula-dir 5.0.1 
(24Feb10): 19-Sep-2012 23:05:19
  Build OS:   i486-pc-linux-gnu ubuntu 10.04
  JobId:  13962
  Job:nina_systemstate.2012-09-19_21.50.01_31
  Backup Level:   Incremental, since=2012-09-18 23:07:02
  Client: nina-fd 5.2.3 (16Dec11) Microsoft Windows Home 
ServerStandard Edition Service Pack 1 (build 3790),Cross-compile,Win32
  FileSet:nina_systemstate-set 2012-04-30 21:50:01
  Pool:   DailyLocal (From Job IncPool override)
  Catalog:MyCatalog (From Client resource)
  Storage:Local (From Pool resource)
  Scheduled time: 19-Sep-2012 21:50:01
  Start time: 19-Sep-2012 23:03:37
  End time:   19-Sep-2012 23:05:19
  Elapsed time:   1 min 42 secs
  Priority:   10
  FD Files Written:   4
  SD Files Written:   0
  FD Bytes Written:   203,957,393 (203.9 MB)
  SD Bytes Written:   0 (0 B)
  Rate:   1999.6 KB/s
  Software Compression:   59.0 %
  VSS:no
  Encryption: no
  Accurate:   yes
  Volume name(s): DailyLocal-0117
  Volume Session Id:  674
  Volume Session Time:1346772020
  Last Volume Bytes:  32,675,913,343 (32.67 GB)
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  Error
  SD termination status:  Running
  Termination:*** Backup Error ***

and

2012-09-24 23:50:02   bacula-dir JobId 4017: Start Sicherung JobId 4017, 
Job=BuHaSrv1_datev.2012-09-24_23.50.00_33
2012-09-24 23:50:02   bacula-dir JobId 4017: There are no more Jobs associated 
with Volume DailyLocal-0005. Marking it purged.
2012-09-24 23:50:02   bacula-dir JobId 4017: All records pruned from Volume 
DailyLocal-0005; marking it Purged
2012-09-24 23:50:02   bacula-dir JobId 4017: Recycled volume DailyLocal-0005
2012-09-24 23:50:02   bacula-dir JobId 4017: Using Device FileStorageLocal
2012-09-24 23:50:03   buhasrv1-fd JobId 4017: shell command: run 
ClientRunBeforeJob c:/backup/stopsql.cmd
2012-09-24 23:50:03   buhasrv1-fd JobId 4017: ClientRunBeforeJob:
2012-09-24 23:50:03   buhasrv1-fd JobId 4017: ClientRunBeforeJob: 
C:\Windows\system32\net stop MSSQL$DATEV_SV_SE01
2012-09-24 23:50:08   buhasrv1-fd JobId 4017: ClientRunBeforeJob: SQL Server 
(DATEV_SV_SE01) wird beendet..
2012-09-24 23:50:08   

Re: [Bacula-users] Network error with FD during Backup: ERR=Connection reset by peer

2012-09-26 Thread Josh Fisher

On 9/26/2012 7:45 AM, Michael Neuendorf wrote:
 Hello there,

 I have a problem while backing up two Windows servers in two different 
 installations. The scenarios are nearly identical:

 - Bacula-dir (v5.0.1) on Ubuntu 10.04.3 virtualized with VMware vSphere 5 
 Hypervisor
 - Bacula-sd (v5.0.1) on same server with file storage on a NAS, mounted via 
 iSCSI.
 - Windows server with the problem in installation 1: Windows 2003 32bit SP1, 
 virtualized on a different VMware vSphere 5 Hypervisor, Bacula-fd 5.2.3
 - Windows server with the problem in installation 2: Windows 2008R2 SP1 64bit, 
 virtualized on the same hypervisor, Bacula-fd 5.2.6
 - Many other servers (virtualized and physical), Windows or Linux, without any 
 problems.

 In both installations, the errors shown below occur two to three times a week 
 (not in every backup). I have three jobs per server, and it is not always the 
 same job, but it is always the same server.

 What I have tried:
 - Upgraded to a newer FD version
 - Limited each server to one concurrent job (Maximum Concurrent Jobs = 1 on 
 all FDs)
 - Set Heartbeat Interval = 60 on FD, SD and Dir

 I hope someone has a clue for me, or a hint on where to troubleshoot further.

The transfer rate is very low in the first server log. Perhaps there 
truly is a problem with a NIC on this hypervisor, or a cable, switch, etc.


 Best regards

 Michael Neuendorf


 The logs of the two servers:
 2012-09-19 22:58:45   bacula-dir JobId 13962: Start Backup JobId 13962, 
 Job=nina_systemstate.2012-09-19_21.50.01_31
 2012-09-19 22:58:46   bacula-dir JobId 13962: Using Device FileStorageLocal
 2012-09-19 23:02:41   nina-fd JobId 13962: DIR and FD clocks differ by 233 
 seconds, FD automatically compensating.
 2012-09-19 23:02:41   nina-fd JobId 13962: DIR and FD clocks differ by 233 
 seconds, FD automatically compensating.
 2012-09-19 23:02:45   nina-fd JobId 13962: shell command: run 
 ClientRunBeforeJob C:/backup/bacula/systemstate.cmd
 2012-09-19 23:02:45   nina-fd JobId 13962: shell command: run 
 ClientRunBeforeJob C:/backup/bacula/systemstate.cmd
 2012-09-19 23:03:40   bacula-dir JobId 13962: Sending Accurate information.
 2012-09-19 23:05:12   bacula-dir-sd JobId 13962: Job write elapsed time = 
 00:01:21, Transfer rate = 2.517 M Bytes/second
 2012-09-19 23:09:06   nina-fd JobId 13962: shell command: run ClientAfterJob 
 C:/backup/bacula/systemstate.cmd cleanup
 2012-09-19 23:09:06   nina-fd JobId 13962: shell command: run ClientAfterJob 
 C:/backup/bacula/systemstate.cmd cleanup
 2012-09-19 23:05:17   bacula-dir JobId 13962: Fatal error: Network error with 
 FD during Backup: ERR=Connection reset by peer

 2012-09-19 23:05:18   bacula-dir-sd JobId 13962: JobId=13962 
 Job=nina_systemstate.2012-09-19_21.50.01_31 marked to be canceled.
 2012-09-19 23:05:19   bacula-dir JobId 13962: Error: Bacula bacula-dir 5.0.1 
 (24Feb10): 19-Sep-2012 23:05:19
Build OS:   i486-pc-linux-gnu ubuntu 10.04
JobId:  13962
Job:nina_systemstate.2012-09-19_21.50.01_31
Backup Level:   Incremental, since=2012-09-18 23:07:02
Client: nina-fd 5.2.3 (16Dec11) Microsoft Windows Home 
 ServerStandard Edition Service Pack 1 (build 3790),Cross-compile,Win32
FileSet:nina_systemstate-set 2012-04-30 21:50:01
Pool:   DailyLocal (From Job IncPool override)
Catalog:MyCatalog (From Client resource)
Storage:Local (From Pool resource)
Scheduled time: 19-Sep-2012 21:50:01
Start time: 19-Sep-2012 23:03:37
End time:   19-Sep-2012 23:05:19
Elapsed time:   1 min 42 secs
Priority:   10
FD Files Written:   4
SD Files Written:   0
FD Bytes Written:   203,957,393 (203.9 MB)
SD Bytes Written:   0 (0 B)
Rate:   1999.6 KB/s
Software Compression:   59.0 %
VSS:no
Encryption: no
Accurate:   yes
Volume name(s): DailyLocal-0117
Volume Session Id:  674
Volume Session Time:1346772020
Last Volume Bytes:  32,675,913,343 (32.67 GB)
Non-fatal FD errors:0
SD Errors:  0
FD termination status:  Error
SD termination status:  Running
Termination:*** Backup Error ***

 and

 2012-09-24 23:50:02   bacula-dir JobId 4017: Start Sicherung JobId 4017, 
 Job=BuHaSrv1_datev.2012-09-24_23.50.00_33
 2012-09-24 23:50:02   bacula-dir JobId 4017: There are no more Jobs 
 associated with Volume DailyLocal-0005. Marking it purged.
 2012-09-24 23:50:02   bacula-dir JobId 4017: All records pruned from Volume 
 DailyLocal-0005; marking it Purged
 2012-09-24 23:50:02   bacula-dir JobId 4017: Recycled volume DailyLocal-0005
 2012-09-24 23:50:02   bacula-dir JobId 4017: Using Device FileStorageLocal
 2012-09-24 23:50:03   buhasrv1-fd JobId 4017: shell command: run 
 ClientRunBeforeJob 

Re: [Bacula-users] Network error with FD during Backup: ERR=Connection reset by peer

2012-09-26 Thread Thomas Lohman

 2012-09-19 22:58:45   bacula-dir JobId 13962: Start Backup JobId 13962, 
 Job=nina_systemstate.2012-09-19_21.50.01_31
 2012-09-19 22:58:46   bacula-dir JobId 13962: Using Device FileStorageLocal
 2012-09-19 23:02:41   nina-fd JobId 13962: DIR and FD clocks differ by 233 
 seconds, FD automatically compensating.
 2012-09-19 23:02:41   nina-fd JobId 13962: DIR and FD clocks differ by 233 
 seconds, FD automatically compensating.
 2012-09-19 23:02:45   nina-fd JobId 13962: shell command: run 
 ClientRunBeforeJob C:/backup/bacula/systemstate.cmd
 2012-09-19 23:02:45   nina-fd JobId 13962: shell command: run 
 ClientRunBeforeJob C:/backup/bacula/systemstate.cmd
 2012-09-19 23:03:40   bacula-dir JobId 13962: Sending Accurate information.
 2012-09-19 23:05:12   bacula-dir-sd JobId 13962: Job write elapsed time = 
 00:01:21, Transfer rate = 2.517 M Bytes/second
 2012-09-19 23:09:06   nina-fd JobId 13962: shell command: run ClientAfterJob 
 C:/backup/bacula/systemstate.cmd cleanup
 2012-09-19 23:09:06   nina-fd JobId 13962: shell command: run ClientAfterJob 
 C:/backup/bacula/systemstate.cmd cleanup
 2012-09-19 23:05:17   bacula-dir JobId 13962: Fatal error: Network error with 
 FD during Backup: ERR=Connection reset by peer

We have seen that same error (Connection reset by peer) occasionally 
for many months.  Some are normal: Mac/Windows desktops/laptops that 
get rebooted or removed from the network during a backup, etc. 
But sometimes we see this error with UNIX servers that are up 24x7.  We 
suspect that it is network related, since we've had similar errors with 
print servers and non-Bacula backup servers, but we have yet to pin it 
down.  We restart failed jobs in Bacula, so the job typically completes 
OK on the retry even after initially getting this error on the first try. 
I'd be curious to know whether others get these errors occasionally, and 
what version of Bacula you're running.


--tom



--
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and 
threat landscape has changed and how IT managers can respond. Discussions 
will include endpoint security, mobile security and the latest in malware 
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] List Volumes with a Full Backup in it

2012-09-26 Thread Summers, James B. II
Hello All,

I need to build a list of my file storage volumes that contain a full backup.  
I will then use that list to make an offsite long-term archive.

I could not find a way to do it efficiently with the bls program, so I turned 
to an SQL query instead.  Here is what I have written:
--
select distinct(jobmedia.mediaid)
from job, jobmedia
where level = 'F' 
   and name != 'BackupCatalog'
   and name != 'RestoreFiles'
   and job.jobid = jobmedia.jobid
order by jobmedia.mediaid
;
--

My volumes are actually named Vol, and the mediaid is an integer, but it 
seems to match the sequence I see in my email reports.

Does anyone know if the SQL above is correct for what I am trying to get?

I would also like to include only the volumes where the job finished successfully.  
Looking at the jobstatus column in the job table, almost all are T, but I did 
see two records with an E in the jobstatus column.

Does anyone know the description of the codes for jobstatus? 
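
If the status codes mean what I think they do ('T' for terminated normally, 'E' for terminated with errors), the query would become something like this (untested):

```sql
-- Same query as above, restricted to jobs that terminated normally ('T').
select distinct(jobmedia.mediaid)
from job, jobmedia
where level = 'F'
   and jobstatus = 'T'
   and name != 'BackupCatalog'
   and name != 'RestoreFiles'
   and job.jobid = jobmedia.jobid
order by jobmedia.mediaid
;
```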


TIA
Jim






[Bacula-users] Bacula backup from alternative datastore

2012-09-26 Thread shockwavecs
We have two NAS boxes that sync all data over DRBD. When NAS1 goes down, I want 
NAS2 to automatically become the system that gets backed up to tape. 

If NAS1 is up, back up NAS1; otherwise NAS2 becomes the backup source in its place. 

Is this possible? Any reason not to? DRBD is in master-slave config, so data 
*should* never be served from two places at once.

+--
|This was sent by shockwav...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--



--
How fast is your code?
3 out of 4 devs don't know how their code performs in production.
Find out how slow your code is with AppDynamics Lite.
http://ad.doubleclick.net/clk;262219672;13503038;z?
http://info.appdynamics.com/FreeJavaPerformanceDownload.html
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula backup from alternative datastore

2012-09-26 Thread Ana Romero
Hello shockwavecs, you have a very difficult name :) 

Well, it is possible! You have to install the bacula-fd agent on both NAS1 
and NAS2, but the Client in bacula-dir must be configured with the cluster IP. 

This is: 

Client bacula-fd in NAS1 


FileDaemon { # this is me 
Name = nas01-fd 
FDport = 9102 # where we listen for the director 
WorkingDirectory = C:\\Program Files\\Bacula\\working 
Pid Directory = C:\\Program Files\\Bacula\\working 
Maximum Concurrent Jobs = 10 
} 
# List Directors who are permitted to contact this File daemon 
# 
Director { 
Name = DIRECTOR-dir 
Password = x 
} 



Client bacula-fd in NAS2 


FileDaemon { # this is me 
Name = nas02-fd 
FDport = 9102 # where we listen for the director 
WorkingDirectory = C:\\Program Files\\Bacula\\working 
Pid Directory = C:\\Program Files\\Bacula\\working 
Maximum Concurrent Jobs = 10 
} 
# List Directors who are permitted to contact this File daemon 
# 
Director { 
Name = DIRECTOR-dir 
Password = x 
} 

Client nas-fd in bacula-dir 

Client { 
Name = nas-fd 
Address = nas.domain.local (or the cluster IP) 
FDPort = 9102 
Catalog = MyCatalog 
Password = x # password for FileDaemon 
AutoPrune = yes # Prune expired Jobs/Files 
Maximum Concurrent Jobs = 10 
} 

Regards 
Ana 
- Original message -

From: shockwavecs bacula-fo...@backupcentral.com 
To: bacula-users@lists.sourceforge.net 
Sent: Wednesday, September 26, 2012 17:13:18 
Subject: [Bacula-users] Bacula backup from alternative datastore 

We have two NAS boxes that sync all data over DRBD. When NAS1 goes down, I want 
NAS2 to automatically become the system that gets backed up to tape. 

If NAS1 is up, back up NAS1; otherwise NAS2 becomes the backup source in its place. 

Is this possible? Any reason not to? DRBD is in master-slave config, so data 
*should* never be served from two places at once. 







Re: [Bacula-users] Bacula backup from alternative datastore

2012-09-26 Thread Josh Fisher

On 9/26/2012 11:13 AM, shockwavecs wrote:
 We have two NAS boxes that sync all data over DRBD. When NAS1 goes down I 
 want NAS2 to automatically become the system to backup to tape.

 If NAS1 is up, then backup NAS1, else NAS2 becomes backup source for NAS1.

 Is this possible? Any reason not to? DRBD is in master-slave config, so data 
 *should* never be served from two places at once.

This is beyond the scope of Bacula. Basically, use Corosync/Pacemaker 
(see http://www.clusterlabs.org/) to set up the NAS as a high-availability 
cluster service with NAS1 and NAS2 as the cluster nodes. The DRBD 
storage, bacula-fd, and a virtual IP address will be under cluster 
control, meaning all of them run on only one server at a time and the IP 
is assigned to only one server at a time.  Alternatively, set up a single 
virtual machine on the DRBD storage that also runs bacula-fd, then use 
Corosync/Pacemaker to run the VM on only one node at a time. Either way, 
it just looks like a single NAS box with a single IP address to Bacula 
and doesn't require anything fancy from Bacula.
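
A rough sketch of the first approach with the pcs shell (untested; the resource names, the DRBD resource "nasdata", and the IP are invented for illustration, and exact pcs syntax varies between versions):

```conf
# Virtual IP that Bacula's Client resource would point at:
pcs resource create nas-vip ocf:heartbeat:IPaddr2 ip=192.168.1.50 cidr_netmask=24

# Promotable (master/slave) resource wrapping the DRBD device:
pcs resource create nas-drbd ocf:linbit:drbd drbd_resource=nasdata promotable

# Keep the VIP (and with it bacula-fd's address) on the DRBD primary node:
pcs constraint colocation add nas-vip with Promoted nas-drbd-clone
pcs constraint order promote nas-drbd-clone then start nas-vip
```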




Re: [Bacula-users] LTO3 tape capacity (variable?)

2012-09-26 Thread Stephen Thompson
On 09/25/2012 02:29 PM, Cejka Rudolf wrote:
 Stephen Thompson wrote (2012/09/25):
 The tapes in question have only been used once or twice.

 Do you mean just one or two drive loads and unloads?


Yes, I mean the tapes have only been in a drive once or twice, possibly 
for a dozen sequential jobs while in the drive, but only in and out of 
the drive once or twice.

I have seen this 200-300 GB capacity on new tapes as well as used ones.

I see it in both my SL500 library as well as my C4 library, which is a 
combined 4 LTO3 drives (2 in each library).


 The library is a StorageTek whose SLConsole reports no media (or drive)
 errors, though I will look into those linux-based tools.

 There are several types of errors, recoverable and non-recoverable, and
 I'm afraid that you see just non-recoverable, but it is too late to see
 them.

 Our Sun/Oracle service engineer claims that our drives do not require
 cleaning tapes.  Does that sound legit?

 If you are interested, you can study
 http://www.tarconis.com/documentos/LTO_Cleaning_wp.pdf ;o)
 So in HP's case, it is possible to agree. However, you still
 have to have at least one cleaning cartridge prepared ;o)

 Our throughput is pretty reasonable for our hardware -- we do use disk
 staging and get something like 60 MB/s to tape.

 HP LTO-3 drives can slow their physical speed down to 27 MB/s, IBM LTO-3
 drives to 40 MB/s. Native speed is 80 MB/s, but all these speeds are after
 compression. If you feed 60 MB/s before compression and there are
 stretches compressing somewhat better than 2:1, then you cannot keep an
 HP LTO-3 streaming. For an IBM drive, stretches with just 2:1 compression
 are already enough to cause repositioning.
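
To make the arithmetic above concrete, here is a quick check (my own sketch, just restating the numbers quoted) of when a staging feed rate can no longer keep an LTO-3 drive streaming:

```python
# Shoe-shine check for an LTO-3 drive, using the figures quoted above:
# native speed is 80 MB/s (after compression); the drive can throttle down
# to a minimum of 27 MB/s (HP) or 40 MB/s (IBM) before it must reposition.

def must_reposition(feed_mb_s, compression, min_drive_mb_s):
    """Return True if the drive cannot slow down enough to match the feed.

    feed_mb_s      -- pre-compression data rate coming off the spool disk
    compression    -- compression ratio achieved on this stretch of data
    min_drive_mb_s -- slowest physical write speed the drive supports
    """
    # After compression, the drive only has to write feed/compression MB/s.
    return feed_mb_s / compression < min_drive_mb_s

print(must_reposition(60, 2.0, 27))  # HP at 2:1: 30 MB/s >= 27, still streams -> False
print(must_reposition(60, 2.0, 40))  # IBM at 2:1: 30 MB/s < 40, repositions -> True
print(must_reposition(60, 2.5, 27))  # better compression starves even the HP -> True
```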

 Lastly, the tapes that get 200 vs 800 are from the same batch of tapes,
 same number of uses, and used by the same pair of SL500 drives.  That's
 primarily why I wondered if it could be data dependent (or a bacula bug).

 And what about the reason to switch to the next tape? Do you have something
 like this in your reports?

 22-Sep 02:22 backup-sd JobId 74990: End of Volume 1 at 95:46412 on device 
 drive0 (/dev/nsa0). Write of 65536 bytes got 0.
 22-Sep 02:22 backup-sd JobId 74990: Re-read of last block succeeded.
 22-Sep 02:22 backup-sd JobId 74990: End of medium on Volume 1 
 Bytes=381,238,317,056 Blocks=5,817,238 at 22-Sep-2012 02:22.


Here's an example of a tape that had one job and only wrote ~278 GB to 
the tape:

10-Sep 10:08 sd-SL500 JobId 256773: Recycled volume FB0095 on device 
SL500-Drive-1 (/dev/SL500-Drive-1), all previous data lost.
10-Sep 10:08 sd-SL500 JobId 256773: New volume FB0095 mounted on 
device SL500-Drive-1 (/dev/SL500-Drive-1) at 10-Sep-2012 10:08.
10-Sep 13:02 sd-SL500 JobId 256773: End of Volume FB0095 at 149:5906 
on device SL500-Drive-1 (/dev/SL500-Drive-1). Write of 262144 bytes 
got -1.
10-Sep 13:02 sd-SL500 JobId 256773: Re-read of last block succeeded.
10-Sep 13:02 sd-SL500 JobId 256773: End of medium on Volume FB0095 
Bytes=299,532,813,312 Blocks=1,142,627 at 10-Sep-2012 13:02.


 Do not you use something from the following things in bacula configuration?
  UseVolumeOnce
  Maximum Volume Jobs
  Maximum Volume Bytes
  Volume Use Duration
 ?


No, none of those are configured.


Stephen
-- 
Stephen Thompson   Berkeley Seismological Laboratory
step...@seismo.berkeley.edu215 McCone Hall # 4760
404.538.7077 (phone)   University of California, Berkeley
510.643.5811 (fax) Berkeley, CA 94720-4760



Re: [Bacula-users] LTO3 tape capacity (variable?)

2012-09-26 Thread Stephen Thompson
On 09/26/2012 02:35 PM, Stephen Thompson wrote:
 On 09/25/2012 02:29 PM, Cejka Rudolf wrote:
 Stephen Thompson wrote (2012/09/25):
  The tapes in question have only been used once or twice.

 Do you mean just one or two drive loads and unloads?


 Yes, I mean the tapes have only been in a drive once or twice, possibly
 for a dozen sequential jobs while in the drive, but only in and out of
 the drive once or twice.

  I have seen this 200-300 GB capacity on new tapes as well as used ones.


I think I pointed this out before, but I also have used and new tapes 
with 400-800 GB on them.  It seems really hit or miss, though the tapes 
with 400 GB or less are probably a third of my tapes; the other 
two-thirds hold more than 400 GB.



 I see it in both my SL500 library as well as my C4 library, which is a
 combined 4 LTO3 drives (2 in each library).


 The library is a StorageTek whose SLConsole reports no media (or drive)
 errors, though I will look into those linux-based tools.

 There are several types of errors, recoverable and non-recoverable, and
 I'm afraid that you see just non-recoverable, but it is too late to see
 them.

 Our Sun/Oracle service engineer claims that our drives do not require
 cleaning tapes.  Does that sound legit?

 If you are interested, you can study
 http://www.tarconis.com/documentos/LTO_Cleaning_wp.pdf ;o)
 So in HP's case, it is possible to agree. However, you still
 have to have at least one cleaning cartridge prepared ;o)

 Our throughput is pretty reasonable for our hardware -- we do use disk
 staging and get something like 60 MB/s to tape.

 HP LTO-3 drives can slow their physical speed down to 27 MB/s, IBM LTO-3
 drives to 40 MB/s. Native speed is 80 MB/s, but all these speeds are after
 compression. If you feed 60 MB/s before compression and there are
 stretches compressing somewhat better than 2:1, then you cannot keep an
 HP LTO-3 streaming. For an IBM drive, stretches with just 2:1 compression
 are already enough to cause repositioning.

 Lastly, the tapes that get 200 vs 800 are from the same batch of tapes,
 same number of uses, and used by the same pair of SL500 drives.  That's
 primarily why I wondered if it could be data dependent (or a bacula bug).

 And what about the reason to switch to the next tape? Do you have something
 like this in your reports?

 22-Sep 02:22 backup-sd JobId 74990: End of Volume 1 at 95:46412 on device 
 drive0 (/dev/nsa0). Write of 65536 bytes got 0.
 22-Sep 02:22 backup-sd JobId 74990: Re-read of last block succeeded.
 22-Sep 02:22 backup-sd JobId 74990: End of medium on Volume 1 
 Bytes=381,238,317,056 Blocks=5,817,238 at 22-Sep-2012 02:22.


 Here's an example of a tape that had one job and only wrote ~278Gb to
 the tape:

 10-Sep 10:08 sd-SL500 JobId 256773: Recycled volume FB0095 on device
 SL500-Drive-1 (/dev/SL500-Drive-1), all previous data lost.
 10-Sep 10:08 sd-SL500 JobId 256773: New volume FB0095 mounted on
 device SL500-Drive-1 (/dev/SL500-Drive-1) at 10-Sep-2012 10:08.
 10-Sep 13:02 sd-SL500 JobId 256773: End of Volume FB0095 at 149:5906
 on device SL500-Drive-1 (/dev/SL500-Drive-1). Write of 262144 bytes
 got -1.
 10-Sep 13:02 sd-SL500 JobId 256773: Re-read of last block succeeded.
 10-Sep 13:02 sd-SL500 JobId 256773: End of medium on Volume FB0095
 Bytes=299,532,813,312 Blocks=1,142,627 at 10-Sep-2012 13:02.


 Do not you use something from the following things in bacula configuration?
   UseVolumeOnce
   Maximum Volume Jobs
   Maximum Volume Bytes
   Volume Use Duration
 ?


 No, none of those are configured.


 Stephen



-- 
Stephen Thompson   Berkeley Seismological Laboratory
step...@seismo.berkeley.edu215 McCone Hall # 4760
404.538.7077 (phone)   University of California, Berkeley
510.643.5811 (fax) Berkeley, CA 94720-4760
