Re: [Bacula-users] Bacula 5.0.2 - Action On Purge = Truncate not working

2011-03-28 Thread Ondrej PLANKA (Ignum profile)

On 28.3.2011 16:43, Josh Fisher wrote:
 On 3/27/2011 11:31 AM, Ondrej PLANKA (Ignum profile) wrote:
   Ciao!

   same behavior in the 5.0.3 version. File-based Volumes are not truncated
   after the console command "purge volume action=all allpools storage=File".

   Any hints?

   Does it work with pool=poolname, rather than allpools?
 Unfortunately that does not work either; the result is the same.
 The pool must have:
   Recycle = yes
   RecyclePool = some_pool
   ActionOnPurge = Truncate

 However, changing the pool configuration only affects the creation
 of new volumes. Any volumes created before these changes were made
 to the pool must be updated manually using the update volume=some-volume
 command. You have volumes that do not seem to be affected by the
 action=Truncate purge command. Most likely, those volumes have one of
 the above three settings set incorrectly.
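
 For reference, a rough sketch of such a pool and of the manual update (the
 pool and volume names here are just placeholders, not taken from your setup):

   Pool {
     Name = File-Pool
     Pool Type = Backup
     Recycle = yes
     RecyclePool = File-Pool
     ActionOnPurge = Truncate
   }

 and then in bconsole, for each volume created before the change, something
 along these lines:

   *update volume=File-0001 ActionOnPurge=Truncate
   *update volume=File-0001 recycle=yes

 (or use the interactive "update volume" menu, which also offers applying the
 pool defaults to all volumes of a pool).
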
Thanks Josh.
You are right: once I set Recycle = yes for the given Volume(s) (with the
update volume command) it worked. However, I observed that after manually
purging the Volume(s) (the purge volume command from the console) one Volume
automatically got the status Recycle, because Recycle = yes is set.

Such a Volume is not considered for truncation (purge volume
action=all allpools storage=File) because its status is Recycle and
not Purged, so the file size of this Volume stays the same.
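
Roughly, the sequence looks like this (volume and pool names are just examples):

  *update volume=File-0001 recycle=yes
  *purge volume=File-0001                         (volume ends up with status Recycle, not Purged)
  *purge volume action=all allpools storage=File  (this volume is skipped, its file keeps its full size)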

Any hints?

Thanks, Ondrej.




Re: [Bacula-users] Bacula 5.0.2 - Action On Purge = Truncate not working

2011-03-27 Thread Ondrej PLANKA (Ignum profile)
  Ciao!
 
  same behavior in the 5.0.3 version. File-based Volumes are not truncated
  after the console command "purge volume action=all allpools storage=File".
  Any hints?
 
  Does it work with pool=poolname, rather than allpools?
Unfortunately that does not work either; the result is the same.
 
  Thanks, Ondrej.




Re: [Bacula-users] Bacula 5.0.2 - Action On Purge = Truncate not working

2011-03-27 Thread Ondrej PLANKA (Ignum profile)
Ciao!

Of course - I observed this behavior under both Bacula 5.0.2 and 5.0.3.

B.rgds, Ondrej.

On 27.3.2011 22:09, John Drescher wrote:
 On Sun, Mar 27, 2011 at 11:31 AM, Ondrej PLANKA (Ignum profile)
 ondrej.pla...@ignum.cz  wrote:
 Ciao!
   
 same behavior in the 5.0.3 version. File-based Volumes are not truncated
 after the console command "purge volume action=all allpools storage=File".
 Any hints?
   
 Does it work with pool=poolname, rather than allpools?
 Unfortunately that does not work either; the result is the same.
   

 If you are sure you are using 5.0.3 and not 5.0.0, I would file a bug report:


 http://bugs.bacula.org/login_page.php

 John









[Bacula-users] Bacula 5.0.2 - Action On Purge = Truncate not working

2011-03-20 Thread Ondrej PLANKA (Ignum profile)

Ciao!

Guys, do you have experience with the new "Truncate Volume after Purge" feature
(http://bacula.org/5.0.x-manuals/en/main/main/New_Features_in_5_0_1.html#SECTION0041)?


I tried to set it up and apply it manually from the console (purge volume action=all
allpools storage=File) under Bacula 5.0.2, but it is not working.


The Volume (disk based) has status Purged and ActionOnPurge=Truncate set,
but after the console command "purge volume action=all allpools storage=File"
I got this message:

This command can be DANGEROUS!!!
It purges (deletes) all Files from a Job,
JobId, Client or Volume; or it purges (deletes)
all Jobs from a Client or Volume without regard
to retention periods. Normally you should use the
PRUNE command, which respects retention periods.
No volume founds to perform all action(s)

Although the Volume should be a candidate for truncation on the SAN storage
(the SD should set the file to 0 bytes on disk), no action was performed.
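
For reference, the state of the Volume can be checked in bconsole with something
along these lines (the volume name is just an example):

  *llist volume=File-0001

which should show volstatus: Purged and a non-zero actiononpurge value for a
volume that is eligible for truncation.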


Any hints/ideas about what is wrong?

Thanks, Ondrej.





Re: [Bacula-users] Tuning for large (millions of files) backups?

2010-10-31 Thread Ondrej PLANKA (Ignum profile)
Hello Henrik,

what are you using? MySQL?

Thanks, Ondrej.

'Mingus Dew' wrote:
Henrik,
Have you had any problems with slow queries during backup or restore
jobs? I'm thinking about http://bugs.bacula.org/view.php?id=1472
specifically, and considering that the bacula.File table already has 73
million rows in it and I haven't even successfully run the big job
yet.

Not really.

We have several 10+ million file jobs - all run without problems (backup
and restore).

I am aware of the fact that a lot of Bacula users run PG (Bacula
Systems also recommends PG for larger setups), but nevertheless
MySQL has served us very well so far.


Just curious as a fellow Solaris deployer...

Thanks,
Shon

On Fri, Oct 8, 2010 at 3:30 PM, Henrik Johansen
hen...@scannet.dk wrote:
'Mingus Dew' wrote:
All,
I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running
MySQL 4.1.22 for the database server. I do plan on upgrading to a
compatible version of MySQL 5, but migrating to PostgreSQL isn't an
option at this time.

I am trying to back up to tape a very large number of files for a
client. While the data size is manageable at around 2TB, the number of
files is incredibly large.
The first of the jobs had 27 million files and initially failed because
the batch table became full. I changed myisam_data_pointer_size
to a value of 6 in the config.

This job was then able to run successfully and did not take too long.
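
For reference, the change was along these lines in my.cnf (shown only as a
sketch):

  [mysqld]
  # a larger pointer size raises the maximum size of new MyISAM tables,
  # including the batch table Bacula creates during attribute insertion
  myisam_data_pointer_size = 6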

I have another job which has 42 million files. I'm not sure what that
equates to in rows that need to be inserted, but I can say that I've
not been able to run the job successfully, as it seems to hang for
over 30 hours in a "Dir inserting attributes" status. This causes
other jobs to back up in the queue, and once it is canceled I have to restart
Bacula.

I'm looking for a way to boost the performance of MySQL or Bacula (or both)
to get this job completed.

You *really* need to upgrade to MySQL 5 and change to InnoDB - there is no
way in hell that MySQL 4 + MyISAM is going to perform decently in your
situation.
Solaris 10 is a Tier 1 platform for MySQL, so the latest versions are
always available from http://www.mysql.com in the native pkg format - there
really is no excuse.

We run our Bacula Catalog MySQL servers on Solaris (OpenSolaris), so
perhaps I can give you some pointers.

Our smallest Bacula DB is currently ~70 GB (381,230,610 rows).

Since you are using Solaris 10 I assume that you are going to run MySQL
off ZFS - in that case you need to adjust the ZFS recordsize for the
filesystem that is going to hold your InnoDB datafiles to match the
InnoDB block size.
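
Something along these lines, assuming the datafiles live under /tank/db (as in
the my.cnf below) and the default 16K InnoDB page size:

  zfs set recordsize=16k tank/db
  zfs get recordsize tank/db

Do this before the datafiles are created - blocks that were already written
keep the old recordsize.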

If you are using ZFS you should also consider getting yourself a fast
SSD as a SLOG (or to disable the ZIL entirely if you dare) - all InnoDB
writes to datafiles are O_SYNC and benefit *greatly* from an SSD in
terms of write / transaction speed.

If you have enough CPU power to spare you should try turning on
compression for the ZFS filesystem holding the datafiles - it also can
accelerate DB writes / reads but YMMV.
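
A sketch of both, with a hypothetical device name for the SSD:

  # add the SSD as a dedicated log device (SLOG) to the pool
  zpool add tank log c1t2d0

  # enable compression on the filesystem holding the datafiles
  # (lzjb here; lz4 where the ZFS version supports it)
  zfs set compression=lzjb tank/db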

Lastly, our InnoDB-related configuration from my.cnf:

# InnoDB options
skip-innodb_doublewrite
innodb_data_home_dir = /tank/db/
innodb_log_group_home_dir = /tank/logs/
innodb_support_xa = false
innodb_file_per_table = true
innodb_buffer_pool_size = 20G
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 128M
innodb_log_file_size = 512M
innodb_log_files_in_group = 2
innodb_max_dirty_pages_pct = 90



Thanks,
Shon



--
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet




-- 
Med venlig hilsen / Best 

Re: [Bacula-users] Tuning for large (millions of files) backups?

2010-10-31 Thread Ondrej PLANKA (Ignum profile)
Thanks :)
Which MySQL storage engine are you using for a large Bacula system - MyISAM or InnoDB?
Can you please copy/paste your MySQL configuration? I mean the my.cnf file.

Thanks, Ondrej.


Henrik Johansen wrote:
 'Ondrej PLANKA (Ignum profile)' wrote:
   
 Hello Henrik,

 what are you using? MySQL?
 

 Yes - all our catalog servers run MySQL.

 I forgot to mention this in my last post - we are Bacula Systems
 customers and they have proved to be very supportive and competent.

 If you are thinking about doing large scale backups with Bacula I can
 only encourage you to get a support subscription - it is worth every
 penny.


   
 Thanks, Ondrej.

 'Mingus Dew' wrote:
 
 Henrik,
 Have you had any problems with slow queries during backup or restore
 jobs? I'm thinking about http://bugs.bacula.org/view.php?id=1472
 specifically, and considering that the bacula.File table already has 73
 million rows in it and I haven't even successfully run the big job
 yet.
   
 Not really.

 We have several 10+ million file jobs - all run without problem (backup
 and restore).

 I am aware of the fact that a lot of Bacula users run PG (Bacula
 Systems also recommends PG for larger setups), but nevertheless
 MySQL has served us very well so far.

 
 Just curious as a fellow Solaris deployer...

 Thanks,
 Shon

 On Fri, Oct 8, 2010 at 3:30 PM, Henrik Johansen
 hen...@scannet.dk wrote:
 'Mingus Dew' wrote:
 All,
 I am running Bacula 5.0.1 on Solaris 10 x86. I'm currently running
 MySQL 4.1.22 for the database server. I do plan on upgrading to a
 compatible version of MySQL 5, but migrating to PostgreSQL isn't an
 option at this time.

 I am trying to back up to tape a very large number of files for a
 client. While the data size is manageable at around 2TB, the number of
 files is incredibly large.
 The first of the jobs had 27 million files and initially failed because
 the batch table became full. I changed myisam_data_pointer_size
 to a value of 6 in the config.

 This job was then able to run successfully and did not take too long.

 I have another job which has 42 million files. I'm not sure what that
 equates to in rows that need to be inserted, but I can say that I've
 not been able to run the job successfully, as it seems to hang for
 over 30 hours in a "Dir inserting attributes" status. This causes
 other jobs to back up in the queue, and once it is canceled I have to restart
 Bacula.

 I'm looking for a way to boost the performance of MySQL or Bacula (or both)
 to get this job completed.

 You *really* need to upgrade to MySQL 5 and change to InnoDB - there is no
 way in hell that MySQL 4 + MyISAM is going to perform decently in your
 situation.
 Solaris 10 is a Tier 1 platform for MySQL, so the latest versions are
 always available from http://www.mysql.com in the native pkg format - there
 really is no excuse.

 We run our Bacula Catalog MySQL servers on Solaris (OpenSolaris), so
 perhaps I can give you some pointers.

 Our smallest Bacula DB is currently ~70 GB (381,230,610 rows).

 Since you are using Solaris 10 I assume that you are going to run MySQL
 off ZFS - in that case you need to adjust the ZFS recordsize for the
 filesystem that is going to hold your InnoDB datafiles to match the
 InnoDB block size.

 If you are using ZFS you should also consider getting yourself a fast
 SSD as a SLOG (or to disable the ZIL entirely if you dare) - all InnoDB
 writes to datafiles are O_SYNC and benefit *greatly* from an SSD in
 terms of write / transaction speed.

 If you have enough CPU power to spare you should try turning on
 compression for the ZFS filesystem holding the datafiles - it also can
 accelerate DB writes / reads but YMMV.

 Lastly, our InnoDB-related configuration from my.cnf:

 # InnoDB options
 skip-innodb_doublewrite
 innodb_data_home_dir = /tank/db/
 innodb_log_group_home_dir = /tank/logs/
 innodb_support_xa = false
 innodb_file_per_table = true
 innodb_buffer_pool_size = 20G
 innodb_flush_log_at_trx_commit = 2
 innodb_log_buffer_size = 128M
 innodb_log_file_size = 512M
 innodb_log_files_in_group = 2
 innodb_max_dirty_pages_pct = 90



 Thanks,
 Shon



 --
 Med venlig hilsen / Best Regards

 Henrik Johansen
 hen...@scannet.dk

Re: [Bacula-users] Bacula 5.0.2 - Disappearing VSS snapshots producedby Windows

2010-07-09 Thread Ondrej PLANKA (Ignum profile)
Hello guys,

thanks for the pointers. It is exactly as you wrote.

Thanks, Ondrej.

James Harper wrote:
 We are dealing with an issue where, from time to time, the periodic VSS
 snapshots produced by the Windows scheduler disappear after a Bacula backup
 (FULL, INC or DIFF, with VSS enabled in the FileSet definition).

 Any idea about this sporadic disappearance of Windows VSS snapshots?

 

 Someone has already offered one possibility. Running low on disk space is 
 another possibility, but that would only result in one or two snapshots being 
 discarded.

 The Windows event log should tell you what happened and why.
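
 For example, something along these lines on the affected machine (assuming a
 reasonably recent Windows version with wevtutil available):

   vssadmin list shadows
   wevtutil qe Application /c:50 /f:text /rd:true

 The first lists the shadow copies that still exist; the second dumps the most
 recent Application log events, which is where VSS errors usually show up.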

 James

   
