Re: [Bacula-users] Windows Junction

2011-10-07 Thread Jeremy Maes

On 7/10/2011 7:46, glynd wrote:

I have tried to get rid of the errors by adding an exclude section in the 
dir.conf. I have failed. Can someone help me please?

Here is the error

05-Oct 09:07 glyn-laptop-fd JobId 4155: Generate VSS snapshots. Driver=VSS Vista, 
Drive(s)=C
05-Oct 09:08 glyn-laptop-fd JobId 4155:  
C:/users/Glyn/AppData/Local/Application Data is a junction point or a different 
filesystem. Will not descend from C:/users/Glyn into it.

Here is the fileset in the dir.conf

FileSet {
   Name = Glyn Set
   Enable VSS = yes
   Ignore FileSet Changes = yes

   Include {
     Options {
       wilddir = C:/Users/Glyn/AppData
       wilddir = C:/Users/Glyn/Application Data
       wilddir = C:/Users/Glyn/Cookies
       wilddir = C:/Users/Glyn/Documents/My Music
       wilddir = C:/Users/Glyn/Documents/My Pictures
       wilddir = C:/Users/Glyn/Documents/My Videos
       wilddir = C:/Users/Public/Documents/My Music
       wilddir = C:/Users/Public/Documents/My Pictures
       wilddir = C:/Users/Public/Documents/My Videos
       wilddir = C:/Users/Glyn/Local Settings
       wilddir = C:/Users/Glyn/My Documents
       wilddir = C:/Users/Glyn/NetHood
       wilddir = C:/Users/Glyn/PrintHood
       wilddir = C:/Users/Glyn/Recent
       wilddir = C:/Users/Glyn/SendTo
       wilddir = C:/Users/Glyn/Start Menu
       wilddir = C:/Users/Glyn/Templates
       wilddir = c:/users/glyn/.VirtualBox/
       exclude = yes
     }
     Options {
       Compression = GZIP
       ignore case = yes;
       verify = pnugsi
     }

     File = /etc/bacula/Glynbup.txt
   }
}

Two thoughts:
- You're using wilddir directives, but there's nothing wild about your
excludes (no wildcards). I'm guessing that isn't the issue, though; this is:
- The warning you get is for a folder you *haven't* excluded. You
excluded C:/*U*sers/Glyn/AppData, but the warning is for a subdirectory of
C:/*u*sers/Glyn/AppData. Since you didn't put an ignore case = yes
directive in that Options block, you have to watch case sensitivity.
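For what it's worth, a minimal sketch of that exclude block with the directive added (untested against this exact setup; note that an Ignore Case directive only affects the Options block it appears in):

```
Options {
  Ignore Case = yes          # make the wilddir patterns match case-insensitively
  wilddir = C:/Users/Glyn/AppData
  wilddir = C:/Users/Glyn/Application Data
  # ... keep the remaining wilddir lines as they are ...
  exclude = yes
}
```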


Regards,
Jeremy

--
All of the data generated in your IT infrastructure is seriously valuable.
Why? It contains a definitive record of application performance, security
threats, fraudulent activity, and more. Splunk takes this data and makes
sense of it. IT sense. And common sense.
http://p.sf.net/sfu/splunk-d2dcopy2
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Windows Junction

2011-10-07 Thread Devon Dunham
Use the 64-bit client. I think that is happening because Application Data is
hidden somehow. Search for the Windows default fileset or try using regex.

-Original Message-
From: glynd [mailto:bacula-fo...@backupcentral.com] 
Sent: Friday, October 07, 2011 12:47 AM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Windows Junction

I have tried to get rid of the errors by adding an exclude section in the
dir.conf. I have failed. Can someone help me please? 


And for completeness, here is the Glynbup.txt which supplies the list of
files to be backed up 


C:/users/Glyn
C:/chqdata
C:/Garmin
C:/Mail 


TIA
Glyn

+--
|This was sent by g...@cirrus.co.za via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--






[Bacula-users] Windows Junction

2011-10-07 Thread glynd
Thanks Jeremy, that was it: ignore case = yes.

Cheers
Glyn






Re: [Bacula-users] query for file sizes in a job

2011-10-07 Thread John Drescher
2011/10/6 Jeff Shanholtz jeffs...@shanholtz.com:
 I’m currently tuning my exclude rules and one of the things I want to do is
 make sure I’m not backing up any massive files that don’t need to be backed
 up. Is there any way to get bacula to list file sizes along with the file
 names since llist doesn’t do this?


Google search for bacula base64

John



Re: [Bacula-users] query for file sizes in a job

2011-10-07 Thread Jeff Shanholtz
I appreciate that, but either you misunderstood what I'm trying to do or I
just can't seem to make sense of the search results I'm getting as they
apply to my issue. I did see one web page that decodes the base64 string
from a member of this mailing list, but that operates on a single base64
string, not on a whole job (and even if it did, I don't know how to get
bacula to tell me the base64 strings).

I want to either get a full list of files from a job complete with file
sizes so I can sort on the file sizes, or query for files greater than a
certain size. I also probably should have mentioned that I'm stuck on Bacula
v3.03 because it runs on a windows server.

Could you be a little more specific on what kind of answer I'm looking for
in the google results?

Thanks!





Re: [Bacula-users] query for file sizes in a job

2011-10-07 Thread John Drescher


I believe you need to write a query that decodes the base64 strings for
every file. I remember this discussion, although it has been a long time,
so I don't remember the details. I would normally try to track this down
and help you out, but I'm swamped, so for now this is all I can do.

John



Re: [Bacula-users] query for file sizes in a job

2011-10-07 Thread Stuart McGraw
On 10/06/2011 12:36 PM, Jeff Shanholtz wrote:
 I’m currently tuning my exclude rules and one of the things I 
 want to do is make sure I’m not backing up any massive files
 that don’t need to be backed up. Is there any way to get bacula
 to list file sizes along with the file names since llist doesn’t
 do this?

The file size and other file attributes are stored in
(pseudo?-)base-64 encoded form in the lstat field of the
'file' table of the catalog database.
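As a rough illustration, here's a sketch of that decoding in Python. The packing and field order are assumptions from my reading of the Bacula source (each stat value packed MSB-first, 6 bits per character, standard base64 alphabet, no padding; st_size as the 8th space-separated field), so verify against decode_stat() in your version:

```python
# Hedged sketch: unpack Bacula's lstat column from the catalog.
# Assumptions: each field is an integer packed MSB-first, 6 bits per
# character, standard base64 alphabet, no '=' padding; the field order
# starts dev, ino, mode, nlink, uid, gid, rdev, size, so st_size is
# fields[7]. Check against decode_stat() in the Bacula source.

B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def decode_field(packed: str) -> int:
    """Unpack one base64-packed integer (MSB-first, 6 bits per char)."""
    val = 0
    for ch in packed:
        val = (val << 6) | B64.index(ch)
    return val

def lstat_size(lstat: str) -> int:
    """Return st_size (bytes) from a full lstat string."""
    return decode_field(lstat.split()[7])
```

With something like this, sorting a job's files by size is a short script away: select lstat and the path from the catalog, map lstat_size over the rows, and sort.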

I ran into the same problem and, since I'm using Postgresql
for my catalogs, wrote a little pg extension function in C 
that is called with an lstat value and the index number of 
the stat field wanted.  This is used as a base to define 
some one-line convenience functions like lstat_size(text), 
lstat_mtime(text), etc, which then allows one to define 
views like:

   CREATE VIEW v_files AS (
     SELECT f.fileid,
            f.jobid,
            CASE fileindex WHEN 0 THEN 'X' ELSE ' ' END AS del,
            lstat_size (lstat) AS size,
            TIMESTAMP WITH TIME ZONE 'epoch' + lstat_mtime (lstat) * INTERVAL '1 second' AS mtime,
            p.path||n.name AS filename
     FROM file f
     JOIN path p ON p.pathid=f.pathid
     JOIN filename n ON n.filenameid=f.filenameid);

which generates results like:

SELECT * FROM v_files WHERE ...whatever...;

 fileid  | jobid | del |   size   |         mtime          | filename
---------+-------+-----+----------+------------------------+-----------------------------------
 2155605 |  1750 |     |    39656 | 2011-10-06 21:18:17-06 | /srv/backup/files-sdb1.txt
 2155606 |  1750 |     |     4096 | 2011-10-06 21:18:35-06 | /srv/backup/
 2155607 |  1750 | X   |        0 | 2011-10-05 19:59:34-06 | /home/stuart/Maildir/new/1317866374.V803I580003M622752.soga.home
 2155571 |  1749 |     | 39553788 | 2011-10-05 21:24:16-06 | /var/spool/bacula/bacula.dmp
 2155565 |  1748 |     |    39424 | 2011-10-05 20:24:49-06 | c:/stuart/pmt.xls
 2155566 |  1748 |     |     1365 | 2011-10-05 21:22:42-06 | c:/Local/bacula/data/pg_global.sql
 2155567 |  1748 |     | 45197314 | 2011-10-05 21:23:07-06 | c:/Local/bacula/data/pg_jmdict.dmp

I've found it very convenient and will be happy to pass it on to
anyone interested, but I have to add a disclaimer: this was the
first time I'd used C in 20 years, the first time I'd ever written
a PG extension function, and the first time I'd ever looked at the
Bacula source code, so be warned. :-)



Re: [Bacula-users] query for file sizes in a job

2011-10-07 Thread Christian Manal
On 07.10.2011 19:43, John Drescher wrote:
 
 I believe you need to write a query that for every file it decodes the
 base64 strings. I remember this discussion although it has been a long
 time so I do not remember the details. I would normally try to track
 this down and help you out however I am swamped so for now this is all
 I can do..
 
 John

You are correct. There is a field called 'lstat' in the 'file' table
that contains base64 encoded file attributes. The file size is somewhere
in there. I think the function in the bacula source to decode that
base64 string is called 'decode_stat' (don't know where it sits exactly;
grep should help).


Regards,
Christian Manal



Re: [Bacula-users] query for file sizes in a job

2011-10-07 Thread Jeff Shanholtz
Thanks guys. I'm pretty sure I'm using SQLite (I'm having a hard time
determining that definitively, but I don't think I did anything from an
installation point of view beyond just installing Bacula). I assume this
script is PostgreSQL-specific. Looks like the fastest option for me is going
to be to simply search the drives of my 3 client systems for large files and
then check whether any of those files are being backed up when they don't
need to be.

-Original Message-
From: Stuart McGraw [mailto:smcg4...@frii.com] 
Sent: Friday, October 07, 2011 10:30 AM
To: Bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] query for file sizes in a job

On 10/06/2011 12:36 PM, Jeff Shanholtz wrote:
 I'm currently tuning my exclude rules and one of the things I want to 
 do is make sure I'm not backing up any massive files that don't need 
 to be backed up. Is there any way to get bacula to list file sizes 
 along with the file names since llist doesn't do this?





[Bacula-users] restore taking a long time....

2011-10-07 Thread Mike Eggleston
Hi,

I had a server crash and am working on the restore. I have one specific
section of the original file system I'm attempting to restore; this
section is about 40GB. Luckily I had a recent full backup before the
crash. When I enter 'restore' and give the jobid, bconsole begins to
create the synthetic file system so I can select what I want restored.
My problem is that this step takes a massively long time. I've started
and killed the restore several times; one time I waited nearly two days
for the synthetic file system to be created.

Is there a way on the command line I can simply say something like
restore client=server-fd file=/opt/perforce/depot/depot/application
recurse=yes?

The server that runs the bacula server has plenty of disk space and 1GB
of memory. After this experience I would like to increase the memory
and to run bacula on a processor better than the Celeron.

Mike



Re: [Bacula-users] restore taking a long time....

2011-10-07 Thread Ben Walton
Excerpts from Mike Eggleston's message of Fri Oct 07 14:03:18 -0400 2011:

 The server that runs the bacula server has plenty of disk space and
 1GB of memory. After this experience I would like to increase the
 memory and to run bacula on a processor better than the Celeron.

Although you noted that you've got 40G of data, that's not a good
metric in this instance.  Is this 40G of Maildir mail folders or 40G
of blu-ray movie rips?  The number of files and directories will be a
more useful number to look at here.

What is your system doing while the synthetic view is being built?  Is
it paging to disk?  Is the load high?  What is mysql doing (strace)?
What is bacula doing (strace)?  Is there anything else happening on
the system while you're doing this?  Is the mysql database on a volume
(physical spindles) with other things that are still being worked hard
during this action?
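On the command-line part of your question: the restore command does take
enough arguments to run non-interactively, and, if I remember right, a
file= argument that bypasses the directory-tree build entirely. A sketch
from memory, with the jobid and list-file name made up, so check
`help restore` in bconsole for your version:

```
* restore client=server-fd jobid=1234 file=/opt/perforce/depot/depot/application
* restore client=server-fd jobid=1234 file=</tmp/restore-list.txt
```

As far as I recall, the second form (leading <) reads one full path per
line from /tmp/restore-list.txt, and file= matches exact paths rather
than recursing, so a directory tree would need its files enumerated into
the list.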

Thanks
-Ben
--
Ben Walton
Systems Programmer - CHASS
University of Toronto
C:416.407.5610 | W:416.978.4302

