Re: [BackupPC-users] Best Approach for MySQL Backups

2014-09-08 Thread Pedro M. S. Oliveira

Hi!

I like to use mysql-zrm; it creates consistent MySQL backups that
are later fetched by BackupPC.
Depending on your application and backup needs you can even trigger the
mysql-zrm backup from BackupPC.
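
For example, you can hook it in with a pre-dump command in the per-host
config. A rough sketch only (the backup-set name and paths are illustrative;
check the mysql-zrm documentation for the exact invocation on your system):

  # e.g. a per-host config such as /etc/backuppc/dbserver.pl
  $Conf{DumpPreUserCmd} = '$sshPath -q -x -l root $host'
      . ' /usr/bin/mysql-zrm --action backup --backup-set dailyrun';
  $Conf{UserCmdCheckStatus} = 1;   # abort the dump if the ZRM run fails

BackupPC then simply picks up the dump files that mysql-zrm left on disk.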


Cheers,
Pedro



--
--
Pedro M. S. Oliveira
IT Consultant   
Email: pmsolive...@gmail.com
URL: http://www.linux-geex.com
--


On 09/08/2014 04:09 PM, Francisco Suarez wrote:
What would be a good approach for backing up MySQL databases on hosts 
with BackupPC?







Re: [BackupPC-users] Best Approach for MySQL Backups

2014-09-08 Thread Pedro M. S. Oliveira

Hi,

I wrote a simple how-to on this; have a look here:

 * https://www.linux-geex.com/mysql-zrm-backuppc-centos-7/

Cheers,
Pedro Oliveira

On 09/08/2014 04:09 PM, Francisco Suarez wrote:
What would be a good approach for backing up MySQL databases on hosts 
with BackupPC?





--
--
Pedro M. S. Oliveira
IT Consultant   
Email: pmsolive...@gmail.com
URL: http://www.linux-geex.com
--



Re: [BackupPC-users] BackupPC_nightly runs Out of Memory

2014-01-16 Thread Pedro M. S. Oliveira

Hello
What's the backuppc version? How many files are in the pool?
On 01/16/2014 11:15 AM, Remi Verchere wrote:

Dear all,

I have been using BackupPC for some days, with the following set-up:
- Host with 2 CPUs, 2 GB of RAM, 16 GB of swap
- Debian Wheezy 64-bit, BackupPC sid package (3.3.0)
- 38 hosts backed up
  - 38 full backups with a total size of 500 GB
  - 130 incremental backups with a total size of 220 GB
- Backups are on an NFS mount (OpenMediaVault, JFS partition)

When BackupPC_nightly runs, it hits an Out Of Memory error:
$ sudo -u backuppc /usr/share/backuppc/bin/BackupPC_nightly 2 3
BackupPC_stats 2 = pool,0,0,0,0,0,0,0,0,0,0,
BackupPC_stats 3 = pool,0,0,0,0,0,0,0,0,0,0,
BackupPC_stats 2 = cpool,13502,17,719152,678028,0,0,7,7,0,1336,29727
Out of memory!

I see that BackupPC_nightly takes all the swap memory, and after a
while the oom-killer is invoked.


Any idea on how I can resolve this issue ?

Thanks,

Rémi




--
--
Pedro M. S. Oliveira
IT Consultant   
Email: pmsolive...@gmail.com
URL: http://www.linux-geex.com
Telefone: +31646899378 or +351965867227
Skype: x_pedro_x
--



Re: [BackupPC-users] stumped by ssh

2012-10-11 Thread Pedro M. S. Oliveira
Hi,
If you are using Red Hat 6 or one of its clones, check whether you're running
SELinux. If so, configure its policies to allow SSH key-based login, or
put it in permissive mode.
Check the Red Hat documentation for details.
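For a quick check on the client, something along these lines (standard
SELinux tools; adjust the path to your layout):

  getenforce                 # Enforcing / Permissive / Disabled
  setenforce 0               # temporarily switch to permissive mode
  # if key login starts working, relabel root's ssh files instead of
  # leaving SELinux permissive:
  restorecon -R -v /root/.ssh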
Cheers,
Pedro Oliveira
http://www.linux-geex.com
 On Oct 10, 2012 6:25 PM, Robert E. Wooden rbrte...@comcast.net wrote:

  I have been using Backuppc for a few years now. Recently I upgraded my
 machine to newer, faster hardware. Hence, I have experience exchanging ssh
 keys, etc.

 It seems I have one client that refuses to connect via ssh. When I
 exchanged keys and ran ssh -l root *[clientip]* whoami the client
 properly returns 'root'. When I sudo -u backuppc
 /usr/share/backuppc/BackupPC_dump -v -f *[clienthostname]* I get 'dump
 failed: Unable to read 4 bytes'.

 I have checked versions of rsync, ssh and openssh-server on both the
 backuppc machine and the client. They are the same on both (both running
 Ubuntu 12.04LTS.)

 I have tailed the client's auth.log file; it shows sshd[10109]: Connection
 closed by *[backuppc-ip]* [preauth], so the backuppc machine is closing
 the connection. (I might be tailing the wrong file, I am not sure.)

 Why won't this client connect?

 --
 Robert Wooden
 Nashville, TN. USA

 Computer Freedom? . . . Linux







Re: [BackupPC-users] problems on Red Hat Enterprise Linux 6.0

2012-04-09 Thread Pedro M. S. Oliveira
Check SELinux; use audit2allow to enable SSH root access.
On the other hand, it looks like you may have a noisy login shell; you can
try adjusting the rsync -e command. Another guess is that you are trying to
back up a device, socket or other special file. And one more: check the
BackupPC TopDir (wherever it's mounted).
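For the audit2allow part, a rough sketch (the module name is made up, and
the policycoreutils tools must be installed):

  # build and load a local policy module from the logged sshd denials
  grep sshd /var/log/audit/audit.log | audit2allow -M backuppc_ssh
  semodule -i backuppc_ssh.pp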
Cheers,
Pedro Oliveira
www.linux-geex.com
 On Apr 9, 2012 7:18 PM, Shang-Lin Chen sc...@gps.caltech.edu wrote:

 I recently upgraded a host from Red Hat Enterprise Linux 4 to Red Hat
 Enterprise Linux 6. Since then, BackupPC backups have not been working.

 I'm using rsync over ssh. I commented out 'Defaults requiretty' from
 visudo on the upgraded host and added '-tt' flags to the ssh command to
 get rid of error messages about lack of a tty. Now, when I start a full
 backup, the backup begins to run, but 20 hours later, it's still running
 until it exits with the message: Aborting backup up after signal ALRM.
 This looks like a timeout. Full backups of this host used to take less
 than four hours.

 When I look in the last bad XferLog, I see the following:

 full backup started for directory / (baseline backup #433)
 Running: /usr/bin/ssh -tt -q -x -l backuppc myhostname sudo /usr/bin/rsync
 --server --sender --numeric-ids --perms --owner --group -D --links
 --hard-links --times --block-size=2048 --recursive --one-file-system
 --ignore-times . /
 Xfer PIDs are now 15493
 Got remote protocol 1701274484
 Fatal error (bad version): tcgetattr: Invalid argument

 Sent exclude: /proc
 Sent exclude: /tmp
 Sent exclude: /wavearchive*
 Remote[66]: nvalid argument

 followed by a very long list of symbols and characters that probably
 indicate files being backed up, such as:

 [several dozen lines of raw rsync file-list bytes rendered as random
 symbols -- the recognisable fragments are top-level names such as mnt,
 lost+found, boot, srv, opt, etc, usr, var, home and paths under
 opt/sun/javadb -- omitted here]

 Could somebody help figure out what's going on here?

 Thanks,
 Shang-Lin




Re: [BackupPC-users] TIMEOUT on SMB causes missed files and undetected partial.

2012-02-22 Thread Pedro M. S. Oliveira
Hello,
I would advise you to use the rsync method on Windows, as it's less tricky
than SMB.
There's a how-to at
www.linux-geex.com
Just search for backuppc in the search field.
Cheers
Pedro Oliveira
On Feb 22, 2012 2:29 PM, Steve jellosk...@gmail.com wrote:



 On Mon, Feb 13, 2012 at 2:07 PM, Les Mikesell lesmikes...@gmail.comwrote:

 On Mon, Feb 13, 2012 at 1:33 PM, Steve jellosk...@gmail.com wrote:

 I am backing a Windows XP PC that is remote (Internet) and it takes
 several hours. I am selectively backing up a small amount of data.
 Somewhere in the middle of the backup, I start getting lines like these:

 NT_STATUS_CONNECTION_INVALID listing \ramu\*

 NT_STATUS_CONNECTION_INVALID listing \temp\*

 NT_STATUS_CONNECTION_INVALID listing \temp_wks_5418\*

 Once I get these lines, they seem to continue for every subfolder and
 the files do not get backed up.

 Additionally, those lines are always preceded by a line like this:

 Error reading file 
 \backup\Canon_2GB_SDC\dvd-backup\sees\msp430\MSP430_Console_IAR_STD\timerb.c
  : NT_STATUS_IO_TIMEOUT

 Which is sometimes a large file but often a small file.

 The worst part of this is that the backup misses the remaining data AND 
 gets flagged as Full and not Partial. Therefore Backuppc won't try to 
 continue or do any more with that PC until next week. See the 14917 Xfer 
 errs below.


 I've seen that happen where there was a duplex mismatch between the
 target host and its switch connection (one configured for 'full', one for
 auto, but  full means don't negotiate and auto means assume half if
 there is no negotiation).  Any bad network connection could probably cause
 it.  I'd agree that it should probably be interpreted as a more fatal error.

 --
Les Mikesell
   lesmikes...@gmail.com


 No new news on this. It still happens and I haven't been able to figure it
 out. The network topology is this:

 Windows PC  - (10.10.0.?) SonicWallVPN Client - Internet - SonicWall
 VPN Router (10.10.0.1) - BackupPC on Linux-x64 (10.10.0.12)






Re: [BackupPC-users] trying to improve the speed at which backuppc rsync back up processes a large binary file in incremental backups

2012-01-10 Thread Pedro M. S. Oliveira
Are you sure that the wireless connection is capable of 11 MB/s? Isn't it
11 Mb/s (megabits)?

Sent from my galaxy tab 10.1.
On Jan 8, 2012 10:05 PM, Les Mikesell lesmikes...@gmail.com wrote:

 On Sun, Jan 8, 2012 at 4:48 AM, John Habermann
 jhaberm...@cook.qld.gov.au wrote:
 
  You can see that the backup of the /opt share takes nearly the total
  time of the incremental taking about 8 and half hours to complete while
  the backup of the /opt rsync share in the full backup takes about 3 and
  half hours. The full backup is slightly longer than what it takes if I
  just do a rsync over ssh copy of the file from the client server to the
  backup server.
 
  I have found that rsync seems to always transfer the whole file when
  copying this file from the client server to the backup server:
 
  # rsync -avzh --progress -e ssh
  administrator@isabella:ExchangeDailyBackup.bkf ExchangeDailyBackup.bkf
  Password:
  receiving incremental file list
  ExchangeDailyBackup.bkf
   54.44G 100%   10.66MB/s1:21:10 (xfer#1, to-check=0/1)4
 
  sent 3.31M bytes  received 3.27G bytes  486.33K bytes/sec
  total size is 54.44G  speedup is 16.65.

 Note that you have used the -z option with native rsync, which
 backuppc doesn't support.  You can add the -C option to ssh to get
 compression at that layer when you run rsync over ssh, though.
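
 For example, the host's rsync-over-ssh command could then look roughly like
 this (a sketch only; adjust user and paths to your setup):

   $Conf{RsyncClientCmd} = '$sshPath -C -q -x -l root $host $rsyncPath $argList+';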

  My questions for the list are:
  1. Is it reasonable for rsync to transfer the whole file when copying a
  large ntbackup file?

 Yes, those files may have little or nothing in common with the
 previous copy.   If compression or encryption are used they will
 ensure that no blocks match and even if they aren't, the common blocks
 may be skewed enough that rsync can't match them up.

  2. Why does an incremental backup of this file take so much longer than
  a full backup of it or a plain rsync of this file?

 That doesn't make sense to me either.  Are you sure that is consistent
 and not related to something else that might have been using the link
 concurrently?

 --
   Les Mikesell
 lesmikes...@gmail.com





Re: [BackupPC-users] 8.030.000 files, too many to back up?

2011-12-19 Thread Pedro M. S. Oliveira
Sorry to take so long to reply.
Yes, it saves me a lot of time; let me explain.
Although I have a fast SAN and servers, the time to fetch lots of small
files is high: the maximum bandwidth I could get was about 5 MB/s. By
increasing concurrency I can get about 20-40 MB/s, depending on what I'm
backing up at the moment. This way I get more out of the SAN and the backup
server. If I increased concurrency even more I could reach higher
performance, but I don't want BackupPC to steal all the available I/O; to be
honest I don't need it anyway, as I get really good performance this way.
This setup is running at a large financial group and it outperforms very
expensive (and complex) proprietary solutions.

The BackupPC server should have a fair amount of RAM and CPU, and isn't
virtualized; in my case a 4-core server with 8 GB of RAM (although it swaps
a bit). I'm also using ssh + rsync, which adds some overhead, but nothing
critical.

Cheers
Pedro
Sent from my galaxy nexus.
www.linux-geex.com
 On Dec 19, 2011 6:05 PM, Jean Spirat jean.spi...@squirk.org wrote:

 On 18/12/2011 20:44, Pedro M. S. Oliveira wrote:


 You may try to use rsyncd directly on the server. This may speed things
 up.
 Another thing is to split the large backup into several smaller ones. I've
 an email cluster with 8 TB and millions of small files (I'm using Dovecot);
 there's also a SAN involved. In order to use all the available bandwidth I
 configured the backups to run for usernames starting with a to e, f to j,
 and so on, and they all run at the same time. Incrementals take about
 1 hour and fulls about 5.
 Cheers
 Pedro


 I mount the NFS share directly on the BackupPC server, so there's no need
 for rsyncd here; this is like a local backup, with the NFS overhead of
 course.

 Do you gain a lot from splitting instead of doing just one big backup? At
 least you seem to have the same kind of file count that I have.

 regards,
 Jean.



Re: [BackupPC-users] 8.030.000 files, too many to back up?

2011-12-18 Thread Pedro M. S. Oliveira
You may try to use rsyncd directly on the server. This may speed things
up.
Another thing is to split the large backup into several smaller ones. I've
an email cluster with 8 TB and millions of small files (I'm using Dovecot);
there's also a SAN involved. In order to use all the available bandwidth I
configured the backups to run for usernames starting with a to e, f to j,
and so on, and they all run at the same time. Incrementals take about 1
hour and fulls about 5.
Cheers
Pedro


Sent from my galaxy nexus.
www.linux-geex.com
 On Dec 16, 2011 9:47 AM, Jean Spirat jean.spi...@squirk.org wrote:

 hi,

  I use BackupPC to back up a webserver. The issue is that the application
 running on it creates thousands of little files used by a game to build
 maps and various things. We are now at 100 GB of data
 and 8.030.000 files, so the backups take 48 hours and more (it doesn't help
 that the files are on an NFS share). I think I have come to the point where
 file-level backup is at its limit.

   Have any of you hit this kind of issue, and how did you solve it?
 By going for a different way of backing up the files, using a
 block/partition-level backup system, etc.?

 The issue is that so many files make the file-by-file process very slow.
 I was thinking about block-level backup, but I don't even know tools that
 do it, apart from R1backup, which is a commercial one.

 If anyone here has met the same issue and can give some pointers it would be
 great, especially if someone has found a way to continue using backuppc in
 such an extreme situation.


 PS: the backuppc server and the web server are Debian Linux; I use the rsync
 method and back up the NFS share that I mount locally on the backuppc server.

 regards,
 Jean.






Re: [BackupPC-users] Disk was too full !!!

2011-12-14 Thread Pedro M. S. Oliveira
If you reduce the full keep time and the incremental keep time, BackupPC
will do it for you.
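For example (the values are only illustrative; pick whatever matches your
retention policy):

  $Conf{FullKeepCnt} = 2;    # keep only the 2 most recent full backups
  $Conf{FullAgeMax}  = 30;   # and expire fulls older than 30 days
  $Conf{IncrKeepCnt} = 6;    # keep only the 6 most recent incrementals
  $Conf{IncrAgeMax}  = 14;   # and expire incrementals older than 14 days

The next nightly run then removes the expired backups and frees pool space.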
Cheers,
Pedro

Sent from my galaxy tab 10.1.
On Dec 14, 2011 2:21 PM, Luis Calvo lcalvomu...@gmail.com wrote:


Re: [BackupPC-users] Poor BackupPC Performance

2011-07-26 Thread Pedro M. S. Oliveira
Hello,
I have hosts where I reach 20 MB/s over Gb links and about 10 MB/s over Fast
Ethernet.
You may want to look at the filesystem under the BackupPC pool and also at
the compression settings.
Usually I get better performance with BackupPC than with Bacula. I also have
BackupPC running on enterprise servers with 8 GB RAM, hardware RAID and SMP.
The host OS is generally SLES, CentOS or Red Hat, and the filesystem ext3 or
ext4, but with some tweaks. I don't remember them all and can't check as I'm
on my mobile.
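Compression is set in config.pl; for example (an illustrative value: 0
disables it, 1-2 is usually a reasonable CPU/space trade-off):

  $Conf{CompressLevel} = 2;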
Hope this helps,
Pedro Oliveira
On Jul 26, 2011 5:00 AM, C. Ronoz chro...@eproxy.nl wrote:
 I love BackupPC, it's so much simpler than Amanda and Bacula.

 Unfortunately its performance is about 10-fold lower on my servers.

 Host            User      #Full  Full Age (days)  Full Size (GB)  Speed (MB/s)  #Incr  Incr Age (days)  Last Backup (days)  State  Last attempt
 antispam        backuppc  1      2.0              1.98            2.53          1      1.0              1.0                 idle   idle
 directadmin01   backuppc  1      2.0              15.30           1.16          1      1.0              1.0                 idle   idle
 dns1            backuppc  1      2.0              2.44            1.43          1      1.0              1.0                 idle   idle
 dns2            backuppc  1      2.0              2.35            0.18          1      1.0              1.0                 idle   idle
 plesk01         backuppc  1      1.9              10.79           0.85          1      0.9              0.9                 idle   idle

 I wish to back up a total of 20 servers, but I am not sure if I should
pick BackupPC due to poor performance. The load on the servers is very low,
and rsnapshot/bacula backups often ran at speeds above 10 MB per
second.

 I have disabled compression as it caused a fairly high CPU load on the
servers. The servers are connected via a 1 Gbit network and no more than 2
simultaneous backups are running.

 What performance have users here reached in terms of MB/sec for complete
servers? And how poor is the 1.5MB/sec average I am having here? The servers
have low load, the network performance is good.
 --




Re: [BackupPC-users] Can you run multiple copies?

2011-06-17 Thread Pedro M. S. Oliveira
Hi,
I run BackupPC on an email cluster (imap, dovecot, redhat crm, lvm, multipath)
serving almost 2000 accounts. Due to performance issues I've created 5 hosts
pointing to the same IP:
one host backs up the system files
one host backs up /home/[a-d]*
one host backs up /home/[e-i]*

...
one host backs up /home/[t-z]*

I've also set up the server to allow all of these hosts to run at the same
time; this way I can back up all the files very quickly.
The servers are enterprise class with a SAN, and the bottleneck was the small
files on the IMAP server storage, which would take ages to copy even on such
hardware.
This setup saves me about 8 hours on a full backup and 3-4 on an incremental.
The email server doesn't suffer significant slowdowns during backups.

Check the rsync inclusion/exclusion rules and set them up accordingly for your
environment; a rough example follows.
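For illustration, one of the split hosts could be configured roughly like
this (the host and volume names are made up; check how your transfer method
expands such patterns):

  # hosts file: several entries, all aliased to the same machine
  #   mail-sys    0   backuppc
  #   mail-ad     0   backuppc
  #   mail-ei     0   backuppc
  #   ...
  # per-host config for mail-ad:
  $Conf{ClientNameAlias} = 'mail01.example.com';
  $Conf{RsyncShareName}  = ['/home'];
  $Conf{BackupFilesOnly} = { '/home' => ['/[a-d]*'] };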
Cheers,
Pedro

On Friday 17 June 2011 20:03:10 Robert Augustyn wrote:
 Hi,
 
 Can I run multiple copies of backuppc on the same server?
 
 Thanks,
 
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://www.linux-geex.com
Cellular: +351 96 5867227
--



Re: [BackupPC-users] NFS woes

2011-04-19 Thread Pedro M. S. Oliveira
Hi,
I would suggest a different test approach: instead of using dd, try iozone.
dd's I/O is sequential, so it's not typical storage/disk usage; it will only
show you the maximum throughput of the device, not the day-to-day load
pattern on the storage.
Apart from that, you may use UDP for the data transfer. It won't be as safe,
but honestly I don't see a problem there, depending on how trusted and
isolated the network is. There are other mount options, such as noatime (both
on the storage and on the server), that might improve performance.
On the networking level, check whether the storage, switches and network card
support jumbo frames, and if so use them.
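For example, the fstab entry and a quick benchmark could look something like
this (options and sizes are only illustrative; test them before relying on
them):

  # /etc/fstab -- UDP transport, no atime updates
  nfs.server.ourstuff.com:/backup/backuppc  /var/lib/backuppc  nfs  nfsvers=3,udp,noatime,rsize=32768,wsize=32768,hard,intr,bg  0  0

  # jumbo frames, if every hop on the path supports them
  ip link set eth0 mtu 9000

  # iozone automatic mode, capped at 4 GB files, run on the mounted share
  iozone -a -g 4g -f /var/lib/backuppc/iozone.tmp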
Cheers,
Pedro


On Tuesday 19 April 2011 15:43:35 Timothy J Massey wrote:
 comfi backuppc-fo...@backupcentral.com wrote on 04/18/2011 06:57:45 PM:
 
  I recently fired up BackupPC as a replacement for our convoluted and
  outdated Amanda setup to backup an environment of about 200 servers.
  So far, I have BackupPC version 3.1.0 installed on an Ubuntu 10.04 
  system. I'm using this to back up a grand total of three systems: 2 
  other Ubuntu machines and the localhost. Everything was running fine
  and dandy when I had all backup data going to local disk. However, 
  I'm having massive performance issues after switching to an NFS 
  mount for my backup data.
  
  Per the instructions, I mounted /var/lib/backuppc to my NFS share. 
  I'm using the following options:
  
  nfs.server.ourstuff.com:/backup/backuppc  /var/lib/backuppc 
  nfs nfsvers=3,tcp,hard,intr,rsize=32768,wsize=32768,bg
  
  These options were recommended to me by Data Domain, the 
  manufacturer of the storage device containing my NFS mount. This 
  device only supports CIFS and NFS, sadly. I have also tried with 
  larger and smaller rsize/wsize, and noatime. 
  
  Everything appears to be working correctly. I can backup and restore
  with no errors. However, the performance is worse than atrocious and
  I can't believe everything is working
 
 I agree strongly with Adam.  Most likely, your problem is not a BackupPC 
 issue, but an NFS issue.  To determine this, do some performance testing 
 with dd.  If you see the same issue there (and I think you will), you can 
 then start to debug your NFS issues.
 
 I also agree strongly with Adam:  NFS and performance do *not* mix.  I 
 have put much work into making NFS work with high performance for use as a 
 VMware datastore.  All of my research led me to believe that it's just not 
 possible without using very high-end hardware with large battery-backed 
 cache.  You might find my research helpful--at the very least, it will 
 show you how to perform the testing using dd. 
 
 http://communities.vmware.com/thread/263165?start=0&tstart=0
 
 
 Having said all of the above, the results you are seeing (*minutes* for a 
 webpage to display) do not seem like expected low-end performance, they 
 sound like something is *broken*.  However, it is *very* unlikly that it's 
 BackupPC that is broken, but rather something else at a lower layer. 
 Hopefully, a simple test with dd will help you to reproduce it simply and 
 easily, and let you test things more quickly.
 
 Timothy J. Massey
 
  
 Out of the Box Solutions, Inc. 
 Creative IT Solutions Made Simple!
 http://www.OutOfTheBoxSolutions.com
 tmas...@obscorp.com 
  
 22108 Harper Ave.
 St. Clair Shores, MI 48080
 Office: (800)750-4OBS (4627)
 Cell: (586)945-8796 
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://www.linux-geex.com
Cellular: +351 96 5867227
--



Re: [BackupPC-users] bare metal restore?

2011-04-04 Thread Pedro M. S. Oliveira
Hi, I've written about that on my blog some time ago; it's a little how-to.
Just search for backuppc on www.linux-geex.com.
cheers
pedro
On Apr 4, 2011 3:36 PM, Tyler J. Wagner ty...@tolaris.com wrote:
 On Mon, 2011-04-04 at 09:51 -0400, Neal Becker wrote:
 Would there be a similar procedure using rsync?

 I've done it using the GUI. Bring up the affected machine on a Live CD,
 run sshd and install the BackupPC root key. Create a mounted filesystem
 tree in /mnt/, and use the GUI to restore there.

 Afterward:

 mount --rbind /dev /mnt/dev
 mount --rbind /proc /mnt/proc
 chroot /mnt
 update-grub
 grub-install /dev/sda
 reboot

 Regards,
 Tyler

 --
 No one can terrorize a whole nation, unless we are all his accomplices.
 -- Edward R. Murrow





Re: [BackupPC-users] BackupPC 3.2 on SLES11 x64 SP1

2011-03-16 Thread Pedro M. S. Oliveira
In the past I had some installations on SLES10 x64 without issues.
Honestly I don't recall doing anything special to configure it.
Cheers,
Pedro

On Wednesday 16 March 2011 10:02:34 megaram networks wrote:
 Hello all,
 
 has anyone done an installation on SLES11 SP1 yet?
 I tried the same way as on SLES10 but can't get the CGI interface to run.
 
 Apache 2.2.10
 Premature end of script headers: BackupPC_Admin
 
 
 Michael Zeh
 
 

-- 
--
Pedro M. S. Oliveira           Pólo Tecnológico de Lisboa
IT Consultant                  Estr. do Paço do Lumiar,
                               Lote 06, Edifício Multitech - 2C1
Email: pedro.olive...@dri.pt   1600-546 Lisboa
URL:   http://www.dri.pt       http://www.linux-geex.com
Telefone: +351 21 715 30 55    Fax: +351 21 715 30 57
--

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://www.linux-geex.com
Cellular: +351 96 5867227
--





Re: [BackupPC-users] Craig has posted design details of 4.x to the developers list

2011-03-04 Thread Pedro M. S. Oliveira

Hello all,
I was wondering if a new type of backup could be added to the list (right now
we have tar, rsync, archive, etc.). I was thinking of an LVM snapshot, but the
backup wouldn't be done at the FS level, rather at block level.
I'm talking about this because I have a special need for backing up a large FS
(ext4) of 16 TB with small files (hundreds of millions of them) stored on an
HP EVA (this is the storage of an IMAP cluster).
I use BackupPC at 12 other sites and I like it a lot, not only for the
functionality but also for the reliability. Right now there's a tape backup
(with omni backup) for this storage, but the performance is miserable and I
need to find another solution. I've done tests with BackupPC and the results
are really nice, but only with part of the data (20 GB, small files, the same
FS, using the rsync backup type).

I'm not sure about doing the whole 16 TB (performance, backup duration), so
I'm thinking of some kind of block-device backup.
Idea:
1 - Create an LVM snapshot of the block device
2 - Back up the LVM snapshot (I could use dd, but then it would be a full
backup every time), ideally something rsync-like where only the changed
blocks of the block device are transferred (see the rough sketch further
down).

Benefits:
1 - Performance, although the gains only show above roughly 70% disk usage
(45%-50% for small files).
2 - Restore the backup directly into the volume.
3 - Possibility of mounting it on a loop device.

Cons:
The first backup would take ages, and the initial FS should have zeros in its
free space (so the initial backup can use compression efficiently).
This approach is only possible for Unix/Linux filesystems.
The LVM methods for creating snapshots aren't standard, and partitioning /
volume creation need to be addressed and thought through before deployment
(is this a con??)

The recovery method should be able to restore the block device (in this case
an LVM volume).
I can see lots of difficulties with this approach, but the benefits could be
great too.
What do you all think about this?
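Just to make step 1 and the dd fallback concrete, a rough sketch (the volume
names and sizes are made up):

  # snapshot of the volume holding the data
  lvcreate --size 20G --snapshot --name imap-snap /dev/vg0/imap
  # the naive full-image copy I would like to avoid repeating on every run
  dd if=/dev/vg0/imap-snap bs=4M | gzip -1 > /backup/imap.img.gz
  lvremove -f /dev/vg0/imap-snap

The missing piece is the rsync-like step that would send only changed blocks.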
Cheers,
Pedro


On Thursday 03 March 2011 16:48:49 Jeffrey J. Kosowsky wrote:
 Just a plug that Craig Barratt -- the BackupPC creator -- has posted
 several detailed emails to the developers list
 (backuppc-de...@lists.sourceforge.net) outlining the key design and
 feature changes that he is implementing in 4.x.
 
 From all appearances, 4.x is a *substantial* rewrite and includes many
 of the features and goodies that we have all been clamoring for
 particularly regarding getting rid of hard links and the challenges
 they cause in archiving the backups. There also seem to be several
 important extensions (e.g., extended attributes) and efficiency
 improvements.
 
 If anyone is interested in knowing what is in the pipeline and more
 importantly if anyone has comments or suggestions, now is probably the
 time to chime in before things get too fixed in stone. So I would
 suggest that you look at the archives of the developer list and/or
 subscribe to it to see the ongoing discussion.
 
 Craig is incredibly open and welcoming of inputs and feedback.
 (Craig I hope you don't mind my plugging your postings here, but you
 know I'm your fan)
 
 
 
 
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://www.linux-geex.com
Cellular: +351 96 5867227
--



Re: [BackupPC-users] Craig has posted design details of 4.x to the developers list

2011-03-04 Thread Pedro M. S. Oliveira
Hi Adam, 
Thanks for the quick response, but I doubt I can split 16 TB of data and then
back it up in a timely manner.
But it's worth a try.
Cheers,
Pedro


On Friday 04 March 2011 13:27:13 Adam Goryachev wrote:
 On 04/03/11 23:32, Pedro M. S. Oliveira wrote:
  I'm not sure about doing the whole 16 TB (performance, backup duration), so
  I'm thinking of some kind of block-device backup.
  Idea:
  1 - Create an LVM snapshot of the block device
  2 - Back up the LVM snapshot (I could use dd, but then it would be a full
  backup every time), ideally something rsync-like where only the changed
  blocks of the block device are transferred.
 
  Benefits:
  1 - Performance, although the gains only show above roughly 70% disk usage
  (45%-50% for small files).
  2 - Restore the backup directly into the volume.
  3 - Possibility of mounting it on a loop device.
 
  Cons:
  The first backup would take ages, and the initial FS should have zeros in
  its free space (so the initial backup can use compression efficiently).
  This approach is only possible for Unix/Linux filesystems.
  The LVM methods for creating snapshots aren't standard, and partitioning /
  volume creation need to be addressed and thought through before deployment
  (is this a con??)
 
  The recovery method should be able to restore the block device (in this
  case an LVM volume).
  I can see lots of difficulties with this approach, but the benefits could
  be great too.
  What do you all think about this?
 
 Been there, done that, and it works well already (sort of)...
 
 I have a MS Windows VM which runs under Linux and it's 'hard drive' is
 in effect a file. I used to:
 1) use LVM to take a snapshot
 2) copy the raw snapshot image to another location on the same disk
 3) delete the snapshot
 4) split the copy of the image to individual 20M chunks (split)
 5) use backuppc/rsync to backup the chunks
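
 (For illustration, step 4 boils down to something like this; the paths are
 made up:)

   split -b 20M /vmstore/copy-of-vm.img /vmstore/chunks/vm.img.
   # backuppc then backs up /vmstore/chunks/; unchanged 20M pieces pool
   # against the previous run, so only changed chunks cost transfer and space.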
 
 The problem with this is the time for the backup to complete (about
 hours for steps 1 - 3, and another 1 hour for step 4).
 
 Recently, I skipped steps 1 and 3 and just shut down the machine before
 taking the copy; this finishes in about 30 minutes now. In any case,
 backuppc handles this quite well.
 
 The reasons for splitting the image are:
 1) backuppc can take better advantage of pooling since most chunks have
 not changed (one big file means the entire file changes every backup).
 2) backuppc doesn't seem to handle really large files with changes in
 them (ie, performance wise it seems to slow things down a lot).
 
 Hope this helps...
 
 PS, I also use the same idea for certain niche applications that produce
 a very large 'backup' file, I split it into chunks before letting
 backuppc back it up. I also ensure they are de-compressed first, letting
 rsync/ssh/backuppc handle the compression at the transport and file
 system levels.
 
 Regards,
 Adam
 
 --
 Adam Goryachev
 Website Managers
 www.websitemanagers.com.au
 
 

-- 
--
Pedro M. S. Oliveira           Pólo Tecnológico de Lisboa
IT Consultant                  Estr. do Paço do Lumiar,
                               Lote 06, Edifício Multitech - 2C1
Email: pedro.olive...@dri.pt   1600-546 Lisboa
URL:   http://www.dri.pt       http://www.linux-geex.com
Telefone: +351 21 715 30 55    Fax: +351 21 715 30 57
--



Re: [BackupPC-users] How to restore 200GB as fast/easily as possible?

2010-12-21 Thread Pedro M. S. Oliveira
Time will also vary with the type of data you are recovering; small files
take a lot of time.
Cheers
Pedro
On Dec 22, 2010 12:09 AM, Timothy J Massey tmas...@obscorp.com wrote:


 Sent from my iPad

 On Dec 21, 2010, at 4:59 PM, gimili gimil...@gmail.com wrote:

 On 12/21/2010 3:02 PM, Timothy J Massey wrote:

 Tar's needs are pretty low, but if the pool is compressed (or if you
selected a compressed TAR), that could make a big difference. Rsync isn't
going to give you any performance boost on a restore, assuming you're
restoring to an empty folder: *nothing* is going to give you a performance
boost, really. You're simply going to have to wait.

 Even at 10MB/s (you wrote mb/s, which is wrong either way, but I'm
assuming you meant MegaBYTES per second, not MegaBITS per second), it will
only take about 6 hours to restore. By the time you figure out how to make
it faster, it'll probably already be done!

 Not to make you feel worse, but this is why you fully test a backup
system INCLUDING FULL RESTORES before you put it into production. I like to
say you don't have a backup until you restore that backup. Taking the backup
is only half the process...

 Something to think about when it comes to cloud-based backups. We all
know that with the magic of pooling and rsync, the first backup might take a
week, but future backups will only take a few minutes. Unfortunately, that
first *restore* is going to take a week, too... Can you wait that long?
 Thanks Timothy for the great info and advice. Much appreciated! So using
the restore from the web interface and rsync should work reliably if left
overnight with 200GB? I can't see any progress bar, so it is hard to tell if
it is even working. It works with one small directory. How long should it take
before I start seeing folders created?





Re: [BackupPC-users] Call timed out: server did not respond after 20000 ms

2010-12-21 Thread Pedro M. S. Oliveira
Sorry, I wrote 10 s but it's 20 s.
On Dec 22, 2010 12:39 AM, Pedro M. S. Oliveira pmsolive...@gmail.com
wrote:
 I had this problem some time ago; it was due to the AV on the Windows
server.
 It took just too much time to scan large files, and the smb client has a
 fixed timeout of 10 secs. If you want to change it you'll have to recompile
 Samba.
 On Dec 21, 2010 7:27 PM, Jln backuppc-fo...@backupcentral.com wrote:
 Hey!

 I've a problem with BackupPC.
 The sharing method is SMB.


 I want to do a full backup. The backup starts and runs for a few minutes;
 after that I get an error.


 Call timed out: server did not respond after 20000 milliseconds listing
 \x\\xx\*

 It's a PC with Windows XP 32-bit and one HDD with 600.000 files (~85GB).
 It's one of ~25 PCs and all the other backups work fine.

 I don't understand why the backup crashes :-/


 Thanks for help :-)

 Julien [Question]








Re: [BackupPC-users] tar on localhost (and shell escaping)

2010-11-17 Thread Pedro M. S. Oliveira
Hi,
I'm a bit late on this conversation, but why not use rsync instead of tar on
the local machine?
I'm not talking about rsync over ssh to localhost but plain old rsync. I can
see some advantages (in speed, disk space and CPU time).
Here's how I did it:

1 - copy your rsync executable to your BackupPC bin directory
2 - set this rsync suid root, and be sure to only allow this rsync to be
executed by the backuppc user.
3 - in the localhost setup, in the BackupPC config, set:
RsyncClientPath  =  /opt/BackupPC/bin/rsync
RsyncClientCmd  = /opt/BackupPC/bin/rsync $argList+
RsyncClientRestoreCmd  =  $rsyncPath $argList+

This way you can continue to use rsync on the local machine without the
overhead of ssh and tcp/ip.
Don't forget to save the BackupPC config (/etc/backuppc) somewhere else, to
make it easier to restore the config in case of need.
And if you are backing up / don't forget to exclude /mnt /dev /sys /media
/proc
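A minimal sketch of steps 1 and 2 (the paths are the ones used above):

  cp /usr/bin/rsync /opt/BackupPC/bin/rsync
  chown root:backuppc /opt/BackupPC/bin/rsync
  chmod 4750 /opt/BackupPC/bin/rsync   # suid root, executable only by the backuppc group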

If you need some assistance with this setup just drop me a line

Cheers,
Pedro




On Tuesday 16 November 2010 22:36:48 Dale King wrote:
 On Sun, Nov 14, 2010 at 09:00:10AM -0500, Jeffrey J. Kosowsky wrote:
  If you are on the same machine, you can do directly without ssh
  by using sudo (and you can protect a little more by setting up sodoers
  properly). Then there is no compression for the local machine...
  
  $Conf{RsyncClientCmd} = '/usr/bin/sudo $rsyncPath $argList+'; 
 
 Sorry for replying out of order but I lost the original post by Frank.
 
 Here is what I use to stop the date being interpreted.  If someone could
 fix the wiki that would be great:
 
 $Conf{TarClientCmd} = '/usr/bin/env LC_ALL=C /usr/bin/sudo /root/tarCreate -v 
 -f - -C $shareName --totals';
 $Conf{TarIncrArgs} = '--newer=$incrDate $fileList+';
 
 
 Where /root/tarCreate is (different to the wiki):
 
 #!/bin/sh -f
 
 exec /bin/tar -c $@
 
 
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://www.linux-geex.com
Cellular: +351 96 5867227
--



Re: [BackupPC-users] unexpected response rsync version 3.0.7 protocol version 30

2010-10-08 Thread Pedro M. S. Oliveira
Hi,
Did you do an update on your server/client?
If I'm not wrong, that has to do with the Perl rsync module (File::RsyncP).
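You can check which version the BackupPC server has with something like:

  perl -MFile::RsyncP -e 'print "$File::RsyncP::VERSION\n"'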
Cheers
Pedro Oliveira

From my android HD2
On 8 Oct 2010 08:50, Saturn2888 backuppc-fo...@backupcentral.com wrote:
 For some reason I get the error unexpected response rsync version 3.0.7
protocol version 30. The first time I received this error was almost 6 days
ago. I haven't restarted the machine, nothing's happened. I have no clue
what's caused this.







Re: [BackupPC-users] Backuppc Incremental backup only using 1 CPU core...?

2010-10-07 Thread Pedro M. S. Oliveira
Hey Jaco,
BackupPC will use other cores if you have more than one backup running at a 
time.
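
The number of simultaneous dumps is set in config.pl; a minimal sketch
(4 is only an example value):

$Conf{MaxBackups} = 4;

With more than one host queued at the same time, BackupPC runs up to that
many backups in parallel and the extra cores get used.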
Cheers,
Pedro




On Thursday 07 October 2010 13:31:30 Jaco Meintjes wrote:
 Hey Guys, would there be a reason why my backuppc is only using 1 CPU 
 core (per host, I'm currently just backing up 1 host) when doing 
 incremental backups?
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://www.linux-geex.com
Cellular: +351 96 5867227
--

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backing up whole linux system

2010-09-29 Thread Pedro M. S. Oliveira
Hi,
I use backuppc for this purpose too, and I'm quite happy with it.
Some time ago I wrote about this and how to do the restore:

http://www.linux-geex.com/?s=backuppcx=0y=0#/?p=163
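
If you go this route, a per-host exclude list along these lines is a
reasonable starting point (a sketch only; adjust the share name and paths
to your own layout, and exclude the BackupPC pool if it lives on that host):

$Conf{RsyncShareName} = ['/'];
$Conf{BackupFilesExclude} = {
    '/' => ['/proc', '/sys', '/dev', '/tmp', '/var/lib/backuppc'],
};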

Hope it helps
Cheers
Pedro


On Tuesday 28 September 2010 19:54:41 Boniforti Flavio wrote:
 Hello people.
 
 I need to back up my Debian server, which mainly acts as a gateway
 (iptables) and proxy (squid). I'd like to back it up in a way that should
 enable me to recover the whole system onto a new harddisk drive, if the
 actual one would fail.
 Is backupPC right for this purpose, or would it be better to take some sort
 of "snapshots" with some other software/tool?
 
 Many thanks in advance.
 
 
 Flavio Boniforti
 
 PIRAMIDE INFORMATICA SAGL
 Via Ballerini 21
 6600 Locarno
 Switzerland
 Phone: +41 91 751 68 81
 Fax: +41 91 751 69 14
 Url: http://www.piramide.ch
 E-mail: fla...@piramide.ch--
 
  
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://www.linux-geex.com
Cellular: +351 96 5867227
--
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backuppc 3.2.0 deb

2010-09-08 Thread Pedro M. S. Oliveira
Sorry, I use OpenSuSE or SLES.
Cheers,
Pedro

On Wednesday 08 September 2010 14:14:37 B. Alexander wrote:
 Has anyone tried the deb that I rolled? I haven't had a chance to install
 myself, but was mildly surprised to get no feedback. I figure that means
 either it worked or nobody's tried it. :)
 
 --b
 

-- 
--
Pedro M. S. Oliveira Pólo Tecnológico de Lisboa
IT ConsultantEstr. do Paço do Lumiar, 
Lote 06, Edifício Multitech - 2C1
Email: pedro.olive...@dri.pt  1600-546 Lisboa
URL:   http://www.dri.pthttp://www.linux-geex.com
Telefone: +351 21 715 30 55Fax: +351 21 715 30 57
--
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC 3.2.0 released

2010-08-03 Thread Pedro M. S. Oliveira
   DumpPreUserCmd fails.  Reported by John Rouillard.
 
 * Updated BackupPC.pod for $Conf{BackupsDisable}, reported by
   Nils Breunese.
 
 * Added alternate freebsd-backuppc2 init.d script that is
   more compact.  Submitted by Dan Niles.
 
 * Minor updates to lib/BackupPC/Lang/fr.pm from Nicolas STRANSKY
   applied by GFK, and also from Vincent Fleuranceau.
 
 * Minor updates to lib/BackupPC/Lang/de.pm from Klaus Weidenbach.
 
 * Updates to makeDist for command-line setting of version and
   release date from Paul Mantz.
 
 * Add output from Pre/Post commands to per-client LOG file, in addition
   to existing output in the XferLOG file.  Patch from Stuart Teasdale.
 
 * lib/BackupPC/Xfer/Smb.pm now increments xferErrCnt on
   NT_STATUS_ACCESS_DENIED and ERRnoaccess errors from smbclient.
   Reported by Jesus Martel.
 
 * Removed BackupPC_compressPool and BackupPC::Xfer::BackupPCd.
 
 
 --
 The Palm PDK Hot Apps Program offers developers who use the
 Plug-In Development Kit to bring their C/C++ apps to Palm for a share
 of $1 Million in cash or HP Products. Visit us here for more details:
 http://p.sf.net/sfu/dev2dev-palm
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://www.linux-geex.com
Cellular: +351 96 5867227
--

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC for Windows and ESXi best practices

2010-06-28 Thread Pedro M. S. Oliveira
Hi,
To back up Windows with BackupPC I usually do the following:
1 - create an ntbackup routine to save the system state.
2 - back up the whole C: drive (and others if you have them) with BackupPC using 
the rsync service on Windows (including the system state backup files).

to restore:
1 - install windows (same version).
2 - restore backup files (including systemstate backup files)
3 - restore systemstate with ntbackup.

Now you should try the method and see if it works for you.
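
For step 1, a scheduled job along these lines is a reasonable sketch (the
job name and target path are only examples):

ntbackup backup systemstate /J "SystemState" /F "C:\backups\systemstate.bkf"

BackupPC then picks up the resulting .bkf file together with the rest of
the C: drive on its next run.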
Cheers,
Pedro



On Saturday 26 June 2010 16:34:06 B. Alexander wrote:
 Hey folks,
 
 I've been using BackupPC to back up my network for something like 4 years. I
 am quite comfortable with that part. This query is more general. All of the
 machines on my network are Linux, so my experience is very...one sided.
 
 When I back up a Linux box, my goal is to preserve not only the unique data
 such as /home, but enough of the OS so that I can do a base install (in my
 case, Debian), grab and install the original package list, and restore from
 backups...A process which takes, say, an hour from start to finish for a VM.
 
 
 Windows, OTOH, is a completely new animal for me. Can anyone give me some
 best practices on what to back up on a Windows VM? Or would it be better to
 just back up the VM lock, stock and snapshot? The VM is running on VMware
 ESXi 4.0, so I am still learning how snapshots and what not work.
 
 What are the best practices for Windows (and VMware ESXi) backups?
 
 Thanks,
 --b
 

-- 
--
Pedro M. S. Oliveira  Pólo Tecnológico de Lisboa
IT ConsultantEstrada do Paço do Lumiar, Lote 1
Email: pedro.olive...@dri.pt   Sala 14 – 1600-546 Lisboa
URL:   http://www.dri.pt http://www.linux-geex.com
Telefone: +351 21 715 30 55  Fax: +351 21 715 30 57
--

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] A restore question

2010-06-26 Thread Pedro M. S. Oliveira
A very quick hint: SMB had a 2 GB file size limitation.
Or
the SMB transfer times out at 20 seconds; sometimes this happens because
of antivirus software.
Hello from Portugal

2010/6/25, Almeida backuppc-fo...@backupcentral.com:

 Just for your information: I'm running BackupPC on a Debian server and I've
 tried to restore a file previously backed up on a Windows (w2k3) client, to
 another Windows (w2k3) client. The transfer method I chose was smb.

 Thanks again.

 +--
 |This was sent by m.alme...@comunique-se.com.br via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--



 --
 This SF.net email is sponsored by Sprint
 What will you do first with EVO, the first 4G phone?
 Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/


--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] speed up backups

2010-05-26 Thread Pedro M. S. Oliveira
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://www.linux-geex.com
Cellular: +351 96 5867227
--

--

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Rsync backup won't run

2010-05-26 Thread Pedro M. S. Oliveira
Hi
Some time ago I had a similar problem with Solaris too, and in the end I had 
trash on the shell; you may try passing -e '' to rsync.
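
A quick check (a sketch, reusing the same ssh options BackupPC shows in the
log below) is to count what the remote shell prints for a no-op command:

ssh -q -x -l david aragorn true | wc -c

Anything other than 0 means a login script on the client is writing to
stdout, which corrupts the rsync protocol stream; guarding those lines with
an interactive-shell test usually fixes it.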
Cheers,
Pedro


On Tuesday 25 May 2010 20:17:35 David Wraige wrote:
 Hi there,
 
 I'm having major trouble getting BackupPC to run Rsync backups. I'd be
 very grateful if anyone can offer some help.
 
 I have just installed BackupPC on an Opensolaris 2009.06 (build 134)
 system. That in its own right was a major task, and I'd be happy to
 share my experiences in getting it installed and (nearly!) functional
 with anyone else - in due course I'll try to get round to writing a
 how-to for the wiki.
 
 When I first installed it there was no joy (for the same reasons as
 below), but then it started to work. I made no configuration changes, it
 just started working. Then it just stopped again.
 
 The problem is that I'm receiving the Unable to read 4 bytes error
 every time I start a backup. The log reads as follows:
 
 full backup started for directory /home
 Running: /usr/bin/ssh -q -x -l david aragorn /usr/bin/rsync --server
 --sender --numeric-ids --perms --owner --group -D --links --hard-links
 --times --block-size=2048 --recursive --ignore-times . /home/
 Xfer PIDs are now 660
 Read EOF:
 Tried again: got 0 bytes
 Done: 0 files, 0 bytes
 Got fatal error during xfer (Unable to read 4 bytes)
 Backup aborted (Unable to read 4 bytes)
 Not saving this as a partial backup since it has fewer files than the
 prior one (got 0 and 0 files versus 0)
 
 I know that the ssh keys are correctly set up. The following run on the
 BackupPC server as user backuppc shows that:
 
 -bash-4.0$ whoami  hostname
 backuppc
 saruman
 -bash-4.0$ ssh -l david aragorn whoami  hostname
 david
 aragorn
 
 If I run the BackupPC_dump command manually, I get the following:
 
 -bash-4.0$ /usr/local/BackupPC/bin/BackupPC_dump -vf aragorn
 cmdSystemOrEval: about to system /usr/sbin/ping -s aragorn 56 1
 cmdSystemOrEval: finished: got output PING aragorn: 56 data bytes
 64 bytes from aragorn.wraige.com (192.168.1.107): icmp_seq=0. time=1.556 ms
 
 aragorn PING Statistics
 1 packets transmitted, 1 packets received, 0% packet loss
 round-trip (ms)  min/avg/max/stddev = 1.556/1.556/1.556/-NaN
 
 cmdSystemOrEval: about to system /usr/sbin/ping -s aragorn 56 1
 cmdSystemOrEval: finished: got output PING aragorn: 56 data bytes
 64 bytes from aragorn.wraige.com (192.168.1.107): icmp_seq=0. time=1.551 ms
 
 aragorn PING Statistics
 1 packets transmitted, 1 packets received, 0% packet loss
 round-trip (ms)  min/avg/max/stddev = 1.551/1.551/1.551/-NaN
 
 CheckHostAlive: returning 1.551
 full backup started for directory /home
 started full dump, share=/home
 Running: /usr/bin/ssh -q -x -l david aragorn /usr/bin/rsync --server
 --sender --numeric-ids --perms --owner --group -D --links --hard-links
 --times --block-size=2048 --recursive --ignore-times . /home/
 Xfer PIDs are now 763
 xferPids 763
 Got remote protocol 30
 Negotiated protocol version 28
 
 It hangs forever at that point. If I understand correctly that means
 that the rsync server has started on the client and the following
 confirms this:
 
 da...@aragorn:~$ pgrep -l rsync
 25014 rsync
 
 I just can't figure out why it won't actually do the transfer, and I've
 run out of debugging ideas. If anyone can help me sort this out I'd be
 very grateful.
 
 Thanks in advance,
 David
 
 --
 
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://www.linux-geex.com
Cellular: +351 96 5867227
--

--

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Bare metal restore

2010-05-10 Thread Pedro M. S. Oliveira
Some time ago i wrote about thins in my blog, check it out:
http://www.linux-geex.com/?s=backuppcx=0y=0#/?p=163
Cheers,
Pedro 


On Monday 10 May 2010 13:02:53 Boniforti Flavio wrote:
 Hello list...
 
 I was wondering if I may be doing some sort of bare metal restore of a
 Linux server, if I'd be backing it up *completely* on my backuppc
 server.
 
 What do you think?
 How may I eventually achieve this sort of imaging from my other
 server?
 
 Thanks,
 Flavio Boniforti
 
 PIRAMIDE INFORMATICA SAGL
 Via Ballerini 21
 6600 Locarno
 Switzerland
 Phone: +41 91 751 68 81
 Fax: +41 91 751 69 14
 URL: http://www.piramide.ch
 E-mail: fla...@piramide.ch 
 
 --
 
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://www.linux-geex.com
Cellular: +351 96 5867227
--

--

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Max file sizes

2010-01-24 Thread Pedro M. S. Oliveira

Currently I have some VMware machine files of over 120 GB and the backups go 
smoothly.
I've used tar and rsync, but I've had trouble when I had compression > 6.
The specs for the backup server are: Dell rack server, 2x dual core, 6x 750 GB 
SATA II in hardware RAID 5, 6 GB of RAM, and the RAID controller has 512 MB of 
cache.
This server backs up about 17 servers and 2 desktops: 
*  31 full backups of total size 2071.27GB (prior to pooling and 
compression),
* 195 incr backups of total size 1841.14GB (prior to pooling and 
compression). 
# Pool is 1505.93GB comprising 5443879 files and 4369 directories (as of 1/24 
05:11),
# Pool hashing gives 252 repeated files with longest chain 18,
# Nightly cleanup removed 55176 files of size 220.26GB (around 1/24 05:11),
# Pool file system was recently at 69% (1/24 19:01), today's max is 84% (1/24 
01:00) and yesterday's max was 84%. 

The server has a backup concurrency of 4 (during nights)
The load average is about 10-12 during night


The same BackupPC setup with lower specs would give trouble from time to time, 
and I had to limit the concurrency to 2 and the compression to 2.
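
If the big image files are what hurts, lowering compression for just those
hosts is an easy experiment; a per-host config sketch (3 is only an example
level):

$Conf{CompressLevel} = 3;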
Cheers,
Pedro


On Friday 22 January 2010 12:33:36 Simon Fishley wrote:
 Hi All
 
 The FAQ here 
 http://backuppc.sourceforge.net/faq/limitations.html#maximum_backup_file_sizes
 suggests there are limitations to the maximum size of a file that can
 be backed up, specifically when using GNUtar or smbclient.
 
 I have several users running VirtualBox under Ubuntu and, regularly,
 these virtual machine files exceed 10GB for an individual file. We are
 using rsync to backup the data and the FAQ suggests more testing is
 needed to be certain of the max file size rsync can manage.
 
 Has anyone done any of this testing? Can anyone share with me how they
 are backing up virtual machine files?
 
 Thanks
 Simon
 
 --
 Throughout its 18-year history, RSA Conference consistently attracts the
 world's best and brightest in the field, creating opportunities for Conference
 attendees to learn about information security's most important issues through
 interactions with peers, luminaries and emerging and established companies.
 http://p.sf.net/sfu/rsaconf-dev2dev
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://www.linux-geex.com
Cellular: +351 96 5867227
--

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC on OpenSolaris

2009-09-23 Thread Pedro M. S. Oliveira
I did this some time ago.
What's failing?
On Tuesday 22 September 2009 20:29:02 Linker3000 wrote:
 
 Spent a day trying to make this work and have just given up!
 
 Looks like things have moved on and this guide needs updating - I had to do a 
 lot more work to get the package installer  perl installed and then the link 
 between cgi-bin/index.cgi just didn't go anywhere so there was nothing to run.
 
 Might try again in a few days when I feel less frustrated. Soo much easier on 
 Linux, but I wanted to try the benefits of ZFS  [Shocked]
 
 +--
 |This was sent by linker3...@googlemail.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--
 
 
 
 --
 Come build with us! The BlackBerryreg; Developer Conference in SF, CA
 is the only developer event you need to attend this year. Jumpstart your
 developing skills, take BlackBerry mobile applications to market and stay 
 ahead of the curve. Join us from November 9#45;12, 2009. Register now#33;
 http://p.sf.net/sfu/devconf
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://www.linux-geex.com
Cellular: +351 96 5867227
--

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Using a recover CD to restore a backup made with BackupPC - BackupPC as disaster recovery

2009-08-21 Thread Pedro M. S. Oliveira
Hi, I've written a little how-to on my blog about:
Using a recover CD to restore a backup made with BackupPC - BackupPC as 
disaster recovery
http://www.linux-geex.com/?p=163
I'm thinking of making a boot CD to restore all the BackupPC data to a new 
machine; does anyone have suggestions about this?

Cheers,
Pedro
-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://www.linux-geex.com
Cellular: +351 96 5867227
--

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] HowTo backup __TOPDIR__?

2009-08-06 Thread Pedro M. S. Oliveira
While reading Linux Journal, look at what I found... maybe it will do what
you want.

  You can use the dd and nc commands for exact disk mirroring from one
server to another. The following commands send data from Server1 to Server2:


Server2# nc -l 12345 | dd of=/dev/sdb

Server1# dd if=/dev/sda | nc server2 12345

 Make sure that you issue Server2's command first so that it's listening on
port 12345 when Server1 starts sending its data.

Unless you're sure that the disk is not being modified, it's better to boot
Server1 from a RescueCD or LiveCD to do the copy.
source:

http://www.linuxjournal.com/content/tech-tip-remote-mirroring-using-nc-and-dd
http://www.linux-geex.com
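
For the daily synchronisation discussed below, plain rsync with hard-link
preservation is the usual alternative (a sketch only; the paths are
examples, and with a very large pool the hard-link handling is still
memory-hungry on rsync versions older than 3.0):

rsync -aH --delete /var/lib/backuppc/ backup2:/var/lib/backuppc/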
__


On Thu, Aug 6, 2009 at 9:19 PM, Matthias Meyer matthias.me...@gmx.liwrote:

 Thomas Birnthaler wrote:

  What is the best way to syncronize __TOPDIR__ to another location?
  As I found in many messages, rsync isn't possible because of
  expensive memory usage for the hardlinks.
  Since version 3.0.0 (protocol 3 on both ends) rsync uses an
  incremental mode to generate and compare the file lists on both sides.
  So memory usage decreased a lot, because just a small part of the list
  is in memory all the time. But the massive hardlink usage of BackupPC
  causes very slow copying of the whole structure, because link creation
  on any filesystem seems to be a very expensive task (locks?) ...
 

 So it seems possible to make the initial remote backup by dd and after this
 a daily rsync?
 Because on a daily basis are (hopefully) not tons of new hardlinks which
 have to be created on the remote side.

  In my opinion dd or cp -a isn't possible too because they would copy
  all the data. That would consume to much time if I syncronize the
  locations on a daily basis.
  Any other tool has the same time consumption if it keeps hardlinks
  (cp e.g. does that with option -l).
 
  A somehow lazy solution would be to just copy the pool-Files (hashes
  as file names) by rsync and create a tar archive of the pc
  directory.

 I would believe that creating of the tar archive and copying it to the
 other
 location will consume nearly the same time, space and bandwidth as dd or
 cp. Isn't it?

  The time consuming process of link creation is then deferred
  to the restore case (which may never be needed).
 
  Thomas

 br
 Matthias
 --
 Don't Panic



 --
 Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day
 trial. Simplify your report design, integration and deployment - and focus
 on
 what you do best, core application coding. Discover what's new with
 Crystal Reports now.  http://p.sf.net/sfu/bobj-july
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/

--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] ZFS/Nexenta Ready

2009-07-14 Thread Pedro M. S. Oliveira
Hi,
I agree with you Dan,
I'm a Linux/Solaris admin for several years now and this las weekend I was
working on large migration on a telco, it is  the production machine for
billing, traffic analysis and so on. I'm talking about a machine with 64
processors and 128 GB RAM. If you want you may see more about this in
http://www.linux-geex.com/?p=47 (although the theme is not the migration
itself).
I'm also a fan of ZFS... better saying I was a fan of ZFS.
ZFS performs very poorly in comparison to VeritasFS (I didn't compare with UFS
because they are too distinct): database writing was lousy, although we had
gains in reading. That was to be expected; the system gained new, faster
storage, double the RAM and double the processors, so if it weren't faster it
would be a disaster.
I had already used backuppc with ZFS in a 3 TB pool and it wasn't faster
either, but as I was using FS compression I thought it was coming from
there.
Right now SUN support is working on ZFS on the server I was writing about,
but all options are on the table now, including moving the whole production
system back to Veritas.

Another thing is that ZFS pushes kernel processor usage (at some points, not
all the time) over 50-60% for a minute or two, and that is not good at all.
About the RAM usage: ZFS is limited to 20 GB of RAM so it doesn't spread
around the different boards (this could be bad in case of hardware failure),
and even with 20 GB available only to the FS... it's using it all.

To be sincere I was a bit disappointed, I've been using ZFS on several
smaller servers, but after this I'm not that happy.
I'm hoping SUN can help us out with this one, patching is on the way lets
hope it works.
I don't recommend you to migrate your servers right away to ZFS, as I've
seen it may work very well, but it can also be miserable.
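
For reference, a cap like the 20 GB one mentioned above is the sort of
thing set through the ARC tunable in /etc/system on Solaris (a sketch; the
value is an example and a reboot is needed for it to take effect):

set zfs:zfs_arc_max = 0x500000000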
Cheers,
Pedro


On Tue, Jul 14, 2009 at 3:27 AM, dan danden...@gmail.com wrote:

 you win this one Carl!  Yes, thats what I meant.  It would be more accurate
 to say that nexenta is the only *solaris OS that has an installation package
 available AND that is installable out of the box.  All the others require
 some luv to get backuppc going on because of dependancies.

 FreeBSD is pretty easy to install backuppc on but ZFS is not in a stable
 state there so no really good reason to go with that over a linux.


 On Mon, Jul 13, 2009 at 6:11 AM, Carl Wilhelm Soderstrom 
 chr...@real-time.com wrote:

 On 07/10 06:15 , dan wrote:
  The ONLY distribution that has backuppc available in the repos (bpc3.0)

 You mean the only opensolaris distro that has backuppc packaged?

 There's plenty of Linux packages out there. :)

 --
 Carl Soderstrom
 Systems Administrator
 Real-Time Enterprises
 www.real-time.com


 --
 Enter the BlackBerry Developer Challenge
 This is your chance to win up to $100,000 in prizes! For a limited time,
 vendors submitting new applications to BlackBerry App World(TM) will have
 the opportunity to enter the BlackBerry Developer Challenge. See full
 prize
 details at: http://p.sf.net/sfu/Challenge
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/




 --
 Enter the BlackBerry Developer Challenge
 This is your chance to win up to $100,000 in prizes! For a limited time,
 vendors submitting new applications to BlackBerry App World(TM) will have
 the opportunity to enter the BlackBerry Developer Challenge. See full prize
 details at: http://p.sf.net/sfu/Challenge
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/


--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] rsync troubles on fedora 64bit client

2009-05-04 Thread Pedro M. S. Oliveira
Hi! 
Well, I can't really help you with the issue you have because I don't use Fedora on 
any of my systems, but if you suspect it's rsync, why not compile or use another 
version of rsync to prove it?
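
A sketch of that test (the version and install prefix are only examples):
build a separate rsync on the client and point the per-host config at it:

./configure --prefix=/opt/rsync-test && make && make install

$Conf{RsyncClientPath} = '/opt/rsync-test/bin/rsync';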
Cheers
Pedro

On Friday 01 May 2009 21:19:07 gregor wrote:
 
 hi,
 
 I worked with BackupPC for over two years, but now I have trouble with my new 
 client.
 I upgraded my client's hardware to 64-bit on Fedora 10, and my server (i686 
 CentOS 5) can't make working backups.
 When I start the backups manually I get an error message after the backup 
 runs for about 5 minutes, and the error message is always on a different file, so I 
 think the error doesn't depend on the file.
 
 Read EOF: 
 Tried again: got 0 bytes
 Can't write 4 bytes to socket
 finish: removing in-process file pictures/0083.JPG
 Child is aborting
 Done: 1394 files, 4733617204 bytes
 Got fatal error during xfer (aborted by signal=PIPE)
 Backup aborted by user signal
 Saving this as a partial backup
 dump failed: aborted by signal=PIPE
 
 can anybody help?
 
 greetings from austria
 gregor
 
 +--
 |This was sent by gregor.bin...@gmx.net via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--
 
 
 
 --
 Register Now  Save for Velocity, the Web Performance  Operations 
 Conference from O'Reilly Media. Velocity features a full day of 
 expert-led, hands-on workshops and two days of sessions from industry 
 leaders in dedicated Performance  Operations tracks. Use code vel09scf 
 and Save an extra 15% before 5/3. http://p.sf.net/sfu/velocityconf
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://pedro.linux-geex.com
Cellular: +351 96 5867227
--
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Ubuntu Stability

2009-04-30 Thread Pedro M. S. Oliveira
Something like this happened to me a while ago, and I use SLES. I suspected 
kernel problems, the processor, or memory, did a bunch of testing, and I couldn't 
find the cause. Memory tests ran through the night and still no problem. But 
BackupPC does stress the components: encryption, compression, heavy I/O. I 
bought some new RAM (the most inexpensive item) and started exchanging 
hardware; fortunately it was the RAM, and as it was still under warranty they 
replaced it for free. Since then, no problems.
The funny thing is that the faulty RAM didn't hang the machine or give errors in 
memory tests, though I would get some errors compiling the kernel; I never had 
problems outside of BackupPC.
My advice is that you should check your hardware, and even other conditions such 
as temperature. 
If you want to test with something besides BackupPC, try compiling your kernel (not 
for usage but as a stress test).
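
memtester from the repositories is another option if you want to load the
RAM specifically (a sketch; give it most of your free memory and a few
iterations):

apt-get install memtester
memtester 2048M 5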
Cheers


On Thursday 30 April 2009 17:51:05 Christopher Derr wrote:
 I'm currently running the latest backuppc version that Ubuntu officially 
 supports (it's behind Debian as far as I can tell and I haven't tried to 
 use the Debian version): 3.0.0.  Apt-get shows it's the latest available 
 through Stable.  Anyway, the system (Tyan 2912G2NR, 8 GB memory, 4 TB in 
 RAID 5) crashes often.  Becomes untouchable, I go to the console, hit 
 enter to bring up a logon prompt, then the machine is officially 
 frozen.  I figure it's a kernel panic, but I'm not seeing anything 
 telling in any log I can find.  This happens almost exclusively when 
 backing up a one of our Windows fileservers using rsync with 700 GB+ 
 data over our 1 Gb link. 
 
 My thoughts are it could be Ubuntu or it could be the hardware.  They 
 system doesn't seem to have any issues except on this one machine's 
 backups, and even then it's not every time (just most of the time).  I'm 
 considering moving to Debian and the latest version of Backuppc (3.1...I 
 realize 3.2 is still in beta).  I think I just need to backup my 
 /etc/backuppc config files but if I keep the /var/lib/backuppc mount, I 
 should be able to reinstall the system without affecting the backups.  
 Not sure if the 3.1 upgrade is going to talk with the 3.0 backup files 
 though.
 
 Thoughts about Ubuntu or my upgrade in general?
 
 Thanks,
 
 Chris
 
 --
 Register Now  Save for Velocity, the Web Performance  Operations 
 Conference from O'Reilly Media. Velocity features a full day of 
 expert-led, hands-on workshops and two days of sessions from industry 
 leaders in dedicated Performance  Operations tracks. Use code vel09scf 
 and Save an extra 15% before 5/3. http://p.sf.net/sfu/velocityconf
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://pedro.linux-geex.com
Cellular: +351 96 5867227
--
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Is backuppc with rsyncrypto instead rsync possible

2009-04-29 Thread Pedro M. S. Oliveira
When you use the rsync method you are already using encryption in transit, as it 
relies on the SSH protocol for the transfer.
If you want to encrypt your backup data at rest, you may encrypt the partition where 
the data will be stored, or, if you don't want to re-partition your HD, you may create 
a crypto filesystem on a file (just be sure the file is large enough for your backup).
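
A rough sketch of the file-backed variant with LUKS (sizes, device names
and paths are only examples):

dd if=/dev/zero of=/backup/cryptpool.img bs=1M count=51200
losetup /dev/loop0 /backup/cryptpool.img
cryptsetup luksFormat /dev/loop0
cryptsetup luksOpen /dev/loop0 backuppc_crypt
mkfs.ext3 /dev/mapper/backuppc_crypt
mount /dev/mapper/backuppc_crypt /var/lib/backuppc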
Hope it helps,
Cheers,
Pedro
On Tuesday 28 April 2009 20:01:45 Matthias Meyer wrote:
 Hi,
 
 I think about an encrypted backup and find rsyncrypto.
 Is there a BackupPC_dump support for rsyncrypto?
 Or any other way to use rsyncrypto with backuppc?
 
 Thanks
 Matthias


-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://pedro.linux-geex.com
Cellular: +351 96 5867227
--
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] FS and backuppc performance

2009-03-19 Thread Pedro M. S. Oliveira
With the amount of data I reported and the number of files, I have just 6% of inodes 
occupied, so I don't think that is really a problem. Do you use XFS for any 
special purpose besides dynamic inode creation? What do you think about the 
recovery and maintenance tools for XFS? And last but not least, don't you have a 
bigger processor overhead with XFS?

Usually people tend to say the processor is not important while backing up, but from 
what I've seen, if you have 8 or more hosts backing up data, the processor 
and memory are stressed. If you have to manage an FS with a large processor 
demand, can't this be a bottleneck?
Cheers,
Pedro M. S. Oliveira

On Wednesday 18 March 2009 19:30:33 Carl Wilhelm Soderstrom wrote:
 On 03/18 05:48 , Pedro M. S. Oliveira wrote:
  What FS do you guys use recommend/used and why?
 
 I typically use XFS for backuppc data pools, and ext3 for the root
 filesystem. I don't want to run out of inodes like ext3 can do. :)
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://pedro.linux-geex.com
Cellular: +351 96 5867227
--
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] FS and backuppc performance

2009-03-18 Thread Pedro M. S. Oliveira
Hi all!

I'm running BackupPC in several installations and sites and I'm very pleased 
with it; one of the sites has more than 3 TB of compressed data and about 6,000,000 
files. Backups run very well: fast and reliable. My question is about FS 
performance.
From what I've seen on the list there are some people using XFS, Ext3, and so 
on. What's your experience with the different file systems?
For now I'm using ext3 and I don't have many problems, with one exception: 
some time ago the BackupPC server was rebooted for a kernel and system security 
update (after being up for 8 months). On boot a file check ran on the 
BackupPC data partition, and it took almost 2 days to run, with lots of inconsistencies 
found and lots of corrections needed. Ext3 was running with noatime and 
nodiratime, and the data mode is journaled. After that I tested some recoveries, which 
went perfectly, and since then I haven't had a problem, but to be honest I didn't 
like seeing the file check run like that and data getting corrupted like that 
either.
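
For reference, the mount options above amount to an fstab line like the
following (a sketch; the device and mount point are examples), and tune2fs
is what controls the periodic boot-time checks (disable them with care):

/dev/sda3  /var/lib/backuppc  ext3  noatime,nodiratime,data=journal  0  2
tune2fs -c 0 -i 0 /dev/sda3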

BTW I'm using 8 SATA drives in a hardware RAID 5 (the RAID utilities say the RAID 
status is fine, as are the hard drives).
I'm a SuSE fan and for years I used reiserfs, which I loved and which never gave me 
problems; the problem is that reiserfs is not maintained as it used to be...

What FS do you guys use recommend/used and why?

Cheers 
Pedro
-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://pedro.linux-geex.com
Cellular: +351 96 5867227
--
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] 38GB file backup is hanging backuppc (more info and more questions)

2009-02-12 Thread Pedro M. S. Oliveira
Hi!
I had this trouble some time ago while backing up VMware virtual machines (the 
files were extremely large, about 120 GB) and rsync would behave just like you 
describe. I had other, smaller VMs of about 10 GB and those worked perfectly 
with rsync.
I did some research, and from what I found the clues pointed to the rsync 
protocol itself when transferring large files.
I changed the transfer mode from rsync to tar and since then I've had no 
trouble.
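
Switching just that host over is a one-line change in its per-host config
file (a sketch; the tar-over-ssh command settings can stay at their
defaults):

$Conf{XferMethod} = 'tar';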
Cheers 
Pedro  

On Wednesday 11 February 2009 22:41:03 John Rouillard wrote:
 Hi all:
 
 Following up with more info with the hope that somebody has a clue as
 to what is killing this backup.
 
 On Fri, Feb 06, 2009 at 03:41:40PM +, John Rouillard wrote:
  I am backing up a 38GB file daily (database dump). There were some
  changes on the database server, so I started a new full dump. After
  two days (40+ hours) it had still not completed. This is over a GB
  network link. I restarted the full dump thinking it had gotten hung.
  [...]
  What is the RStmp file? That one grew pretty quickly to it's current
  size given the start time of the backup (18:08 on feb 5). If I run
  file (1) against that file it identifies it as
 
 From another email, it looks like it is the uncompressed prior copy of
 the file that is currently being transferred.
 
   
 http://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg05836.html
 
 Now because of another comment, I tried to disable the use of RStmp
 and force BackupPC to do the equivalent of the --whole-file where it
 just copies the file across without applying the rsync
 differential/incremental algorithm. I executed a full backup after
 moving the file in the prior backup aside. So I moved 
 
   
 /backup-pc/pc/ldap01.bos1.renesys.com/326/f%2fdata%2fbak/ffedora-ds/fcurrent/fuserRoot/fid2entry.db4
 
 to 
 
   
 /backup-pc/pc/ldap01.bos1.renesys.com/326/f%2fdata%2fbak/ffedora-ds/fcurrent/fuserRoot/fid2entry.db4.orig
 
 I claim this should make rsync diff against a non-existent file and
 thus just copy the entire file, but I still see in the lsof output for
 the BackupPC process:
 
   BackupPC_ 18683 backup8u   REG9,2 36878598144  48203250
   /backup-pc/pc/ldap01.bos1.renesys.com/new/f%2fdata%2fbak/RStmp
 
 and there is only 1 file that is that large (36878598144 bytes) under
 that volume and it is the id2entry.db4 file.
 
 So what did I miss?
 
 Does BackupPC search more than the prior backup? I verified that run
 327 (which is the partial copy) doesn't have any copy of:
 
   f%2fdata%2fbak/ffedora-ds/fcurrent/fuserRoot/fid2entry.db4
 
 in it's tree. So where is the RStmp file coming from?
 
 At this point it's been running about 24 hours and it has transferred
 only 10GB of the 30GB file.
 
 This is ridiculously slow. If my math is right, I should expect a
 36GByte file at a 1Mbit/sec rate (which is about what cacti is showing
 as a steady state throughput on the ethernet port) to transfer in:
 
   '36*1024*8/(3600*24)'= ~3.413
 
 so it will take 3 and a half days at this rate. This is with both
 systems having a lot of idle time and  1% wait state.
 
 If I run an rsync on the BackupPC server to copy the file from the
 same client, I get something closer to 10MBytes/second
 (80Mbits/sec). Which provides a full copy in a bit over an hour.
 
 Also one other thing that I noticed, BackupPC in it's 
 $Conf{RsyncArgs}  setting uses:
 
 
 '--block-size=2048',
 
 which is described in the rsync man page as:
 
-B, --block-size=BLOCKSIZE
   This forces the block size used in the rsync algorithm to a
   fixed value.  It is normally selected based on the size of
   each file being updated.  See the technical report for
   details.
 
 is it possible to increase this or will the Perl rsync library break??
 It would be preferable to not specify it at all allowing the remote
 rsync to set the block size based on the file size.
 
 I see this in lib/BackupPC/Xfer/RsyncFileIO.pm:
 
   sub csumStart
   {
 my($fio, $f, $needMD4, $defBlkSize, $phase) = @_;
 
 $defBlkSize ||= $fio-{blockSize};
 
 which makes it looks like it can handle a different blocksize, but
 this could be totally unrelated.
 
 I could easily see a 2k block size slowing down the transfer of a
 large file if each block has to be summed.
 
 Anybody with any ideas?
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://pedro.linux-geex.com
Cellular: +351 96 5867227
--
--

Re: [BackupPC-users] sudoers

2009-01-23 Thread Pedro M. S. Oliveira
If your problem is just the permissions, and you are already using a copy of 
rsync, why not use the suid flag? Just be careful with file permissions and 
ownership, for security. Just do the following on the machine you want to back 
up:

chmod 550 /path_to_copy_of-rsync/rsync
chown root:backuppc /path_to_copy_of-rsync/rsync
chmod u+s  /path_to_copy_of-rsync/rsync

This will make the rsync command run as root even though you used the backuppc user 
to launch it. Once again, remember the line:
chown root:backuppc /path_to_copy_of-rsync/rsync 
and be sure that no one else is in the backuppc group.

I use this setup to back up the BackupPC server itself without using SSH (saving time, 
memory, and of course CPU cycles).
Hope I helped.
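
For completeness, the per-host settings for that local, ssh-less rsync look
roughly like this (a sketch; the path is the placeholder used above):

$Conf{XferMethod} = 'rsync';
$Conf{RsyncClientPath} = '/path_to_copy_of-rsync/rsync';
$Conf{RsyncClientCmd} = '$rsyncPath $argList+';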
cheers,
Pedro Oliveira
On Thursday 22 January 2009 05:59:54 Adam Goryachev wrote:
 Terri Kelley wrote:
 
  On Jan 21, 2009, at 11:33 PM, Adam Goryachev wrote
 
 
  My mistake as well, I didn't read the rsync man page well enough either.
  Try this one:
 
  rsync -avz -e ssh -p 22 -l backuppc --rsync-path /usr/bin/sudo
  /usr/local/bin/backuppc-rsync
  myserver.domain.net:/root/backups /var/tmp/pwrnctmpback/rsyncmanual
 
  Yep, but resulted in the following:
 
  receiving file list ... rsync: push_dir#3 /home/backuppc/5 failed: No
  such file or directory (2)
  rsync error: errors selecting input/output files, dirs (code 3) at
  main.c(602) [sender=2.6.8]
  rsync: connection unexpectedly closed (8 bytes received so far) [receiver]
  rsync error: error in rsync protocol data stream (code 12) at io.c(463)
  [receiver=2.6.8]
 
  Again not a directory that I specified.
 
  That push is just killing me.
 
 I really don't know enough about what you are doing, nor about ssh/rsync
 to diagnose any further Hopefully someone else will jump in with
 more info.
 
 BTW, can you confirm that /usr/local/bin/backuppc-rsync is actually just
 a copy of /usr/bin/rsync ?
 
 Regards,
 Adam
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://pedro.linux-geex.com
Cellular: +351 96 5867227
--
--
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Large Amounts of Data

2009-01-06 Thread Pedro M. S. Oliveira
In one of the installations of BackupPC I manage there are 6 real servers and 
about 15 virtual servers, with a combined storage size of about 10 TB, and it 
works great with BackupPC.
On the servers running VMware I do a full backup every 30 days with a weekly 
incremental backup. On the VMware hosts we retain 2 full backups, with daily 
incremental backups kept for 30 days, and the same applies to the real servers.
With pooling and compression we don't have more than 8 TB; the FS is reiserfs.
Usually backups run at night on weekdays and freely on weekends. 

What takes longest is the storage server, which has about 5 TB and takes more than a 
day to do a full backup. I also allow multiple backups to run at the same time (10); 
this makes full use of the gigabit cards, CPU and memory (it's a quad core with 
4 GB of RAM). If you run just one backup at a time you will waste your CPU, as many 
of the files you back up are small and your hosts don't send enough data to 
saturate the BackupPC server hardware. With lots of backups running at the same 
time we see steady mdstat figures of 50-60 MB/s written to disk.

The BackupPC server has SATA drives with hardware RAID 5,
so I can say I'm really happy with BackupPC. Right now I have 4 major BackupPC 
servers running with multiple configurations, including backups over the internet (if 
you tweak the ssh options a bit you can compress data on the move). I also have some 
minor installations at my house, for my own data for instance. 
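
A sketch of the corresponding config.pl settings (the numbers mirror the
schedule described above; adjust to taste):

$Conf{FullPeriod} = 29.7;
$Conf{IncrPeriod} = 0.97;
$Conf{MaxBackups} = 10;

and for the hosts backed up over the internet, compression on the move is
just ssh's -C option in the client command, e.g.

$Conf{RsyncClientCmd} = '$sshPath -C -q -x -l root $host $rsyncPath $argList+';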
cheers
Pedro 

On Monday 05 January 2009 23:39:43 Christopher Derr wrote:
 We're a mid-sized university academic department and are fairly happy 
 with our current BackupPC setup.  We currently backup an 800 GB 
 fileserver (which has an iSCSI-attached drive), a few other sub-300 GB 
 fileservers, a bunch of XP desktops.  The 800 GB fileserver takes a long 
 time to back up...almost a full day, and I think this is normal for 
 BackupPC.  We'd like to use BackupPC to backup some of our heftier linux 
 servers -- moving into the multiple terabyte range (5-10 TB).  We're 
 considering a ZFS filesystem over gigabit for our backup target, but 
 obviously are concerned that backing up 5 TB of data would take a week.
 
 Is this where we should consider multiple BackupPC servers to break up 
 the backup time?  Should we move to a solution with less overhead (if 
 there is one)?  Thanks for any input or experiences.
 
 Chris
 
 --
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://pedro.linux-geex.com
Cellular: +351 96 5867227
--
--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC working for a year, all of a sudden nothing works

2009-01-02 Thread Pedro M. S. Oliveira
The same happened to me, and the strangest part is that I could compile the kernel 
and run all the memory tests just fine. Like you, I also had new RAM, so after 
changing lots of things like network cards and removing hardware, I changed the 
RAM (the sticks that all the tests said were fine) and voilà... it started working 
like a charm.
The curious thing is that it only stopped working while doing the backups; I manage 
several hosts, some with tar, others with rsync and some others with Samba. 
I think the memory would only fail when working under stress for a long time (as 
the kernel compilation does), but maybe the kernel compilation wasn't long enough. 
As BackupPC uses compression, encryption and heavy I/O for a long time, it probably 
stresses the computer more than the kernel compilation.

Probably the same is happening to you.
Cheers,
Pedro
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://pedro.linux-geex.com
Cellular: +351 96 5867227
--

On Friday 02 January 2009 04:00:59 Gene Horodecki wrote:
 I've been backing up my house with backuppc a very long time and never 
 had any problems.  I was using the rsyncd method, but a couple months 
 ago it dawned on me that I wasn't getting all the files because of 
 permissions so I switched to the ssh/tar method and it continued to work 
 just fine.
 
 However, over the last little while things have stopped working.  I have 
 three 'hosts' that go overnight and not one of them works.  With the 
 ssh/tar method I get 'unexpected end of tar file' and with the ssh/rsync 
 (as opposed to rsyncd) method I get 'Got fatal error error during xfer 
 (aborted by signal=PIPE)'
 
 It almost seems to me like ssh is dying.  To make matters worse, 
 sometimes it takes my entire system down with it and spontaneously 
 reboots.  I do notice ssh using up almost 80% cpu while the backup is 
 going but otherwise I don't see anything abnormal, other then the system 
 going away...
 
 I've done the following two things in the last month:
 1) increased my RAM from 1G to 3G
 2) upgraded from ubuntu 7.10 to ubuntu 8.10
 
 I just tried upgrading backuppc from whatever version I have to the 
 newest one, and it's still doing the same thing.  Please help??!?
 
 Thanks.
 
 
 --
 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/
 

-- 
--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC working for a year, all of a sudden nothing works

2009-01-02 Thread Pedro M. S. Oliveira
No, it was the RAM; after putting in the new RAM it still works like a charm.
If it were a power issue the server would not work right with the extra 
consumption, and the machine has two power supplies.
And I didn't mention it, but the RAM was ECC.
I just changed the RAM stick and it worked just fine.
Cheers
Pedro

On Friday 02 January 2009 15:41:23 dan wrote:
 That doesnt sound like faulty RAM was the base issue.  It sounds like a
 buggy power supply, which can slowly degrade system components.  The RAM may
 have been stressed by bad power and reached a point where it would no
 operate properly when the voltage fluctuated.  Replacing the RAM may just be
 putting in new RAM that can handle the power issues for a while.  You could
 see another component fail for no apparent reason if it is indeed the power
 supply.  You should really consider doing a load test on your power supply.
 There are some guides online for this, it can be done with a multimeter and
 something to put a load on the system like a TEC attached to a rheostat.
 
 This is one good reason to use ECC RAM.  The error checking circuitry is
 designed to function well outside the tolerances of the RAM it is attached
 to.
 
 On Fri, Jan 2, 2009 at 5:38 AM, Pedro M. S. Oliveira
 pmsolive...@gmail.comwrote:
 
  The same happened to me and the strangest part is i could compile the
  kernel, and do all the memory tests just fine, as you i also had new ram so
  after changing lots of things like network cards, removing hardware and so
  one i changed the ram (the ones that all the tests said it was fine) and
  voilá... it started working like a charm.
  The curious is it just stopped working while doing the backups, I manage
  several hosts ones with tar, other with rsync and some others with samba.
  I think the memory would only stop if working under stress for a long time
  (as the kernel compilation does) but with the kernel compilation maybe it
  wasn't long enough. As backuppc uses compression, encryption and high IO for
  a long time it probably stresses the computer more than the kernel
  compilation.
 
 
  Probably the same is happening to you.
  Cheers,
  Pedro
 
  --
  Pedro M. S. Oliveira
  IT Consultant
  Email: pmsolive...@gmail.com
  URL: http://pedro.linux-geex.com
  Cellular: +351 96 5867227
 
  --
 
 
  On Friday 02 January 2009 04:00:59 Gene Horodecki wrote:
   I've been backing up my house with backuppc a very long time and never
   had any problems. I was using the rsyncd method, but a couple months
   ago it dawned on me that I wasn't getting all the files because of
   permissions so I switched to the ssh/tar method and it continued to work
   just fine.
  
   However, over the last little while things have stopped working. I have
   three 'hosts' that go overnight and not one of them works. With the
   ssh/tar method I get 'unexpected end of tar file' and with the ssh/rsync
   (as opposed to rsyncd) method I get 'Got fatal error error during xfer
   (aborted by signal=PIPE)'
  
   It almost seems to me like ssh is dying. To make matters worse,
   sometimes it takes my entire system down with it and spontaneously
   reboots. I do notice ssh using up almost 80% cpu while the backup is
   going but otherwise I don't see anything abnormal, other than the system
   going away...
  
   I've done the following two things in the last month:
   1) increased my RAM from 1G to 3G
   2) upgraded from ubuntu 7.10 to ubuntu 8.10
  
   I just tried upgrading backuppc from whatever version I have to the
   newest one, and it's still doing the same thing. Please help??!?
  
   Thanks.
  
  
  
  
 
 
 
 
 

-- 
--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://pedro.linux-geex.com
Cellular: +351 96 5867227

Re: [BackupPC-users] Permission denied during backup

2008-12-19 Thread Pedro M. S. Oliveira
BackupPC is usually not run by root but by its own user. Since you are backing up 
the local machine you don't need ssh to do the file transfer.
Just copy rsync to /opt/BackupPC/bin:
chown root:backuppc  /opt/BackupPC/bin/rsync
chmod 550 /opt/BackupPC/bin/rsync
chmod u+s  /opt/BackupPC/bin/rsync

This way you'll have your backups done fast and simple on your local machine.
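
For reference, the per-host config ends up looking something like this (a rough
sketch for BackupPC 3.x; the share name, excludes and pool path below are only
examples, adjust them to your setup):

    $Conf{XferMethod} = 'rsync';
    # point BackupPC at the setuid copy instead of the system rsync
    $Conf{RsyncClientPath} = '/opt/BackupPC/bin/rsync';
    # no ssh needed for localhost, run rsync directly
    $Conf{RsyncClientCmd}        = '$rsyncPath $argList+';
    $Conf{RsyncClientRestoreCmd} = '$rsyncPath $argList+';
    $Conf{RsyncShareName} = ['/'];
    # example excludes (the pool path here is just an example location)
    $Conf{BackupFilesExclude} = {
        '/' => ['/proc', '/sys', '/media', '/var/lib/backuppc'],
    };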
cheers 
Pedro

--
Pedro M. S. Oliveira
IT Consultant 
Email: pmsolive...@gmail.com  
URL:   http://pedro.linux-geex.com
Cellular: +351 96 5867227
--

On Thursday 18 December 2008 20:56:45 Glassfox wrote:
 
 Hello, I want to backup my localhost completely and just backup the root 
 folder with some excludes (proc, sys, media and backuppc pool folder). Backup 
 was successful, but if I look at the error log file there are a lot of 
 Permission denied errors for the home folders, some files in the /root/ 
 folder and some other folders. What is the best way to also get all these 
 files/folders backed up?
 
 Thanks.
 
 





Re: [BackupPC-users] Backuppc on the same (server) system?

2008-12-10 Thread Pedro M. S. Oliveira

Hi, 
To do this I recommend the following. Make a copy of rsync in your BackupPC bin 
directory:
cp rsync /opt/Backuppc/bin
chown root:backuppc /opt/Backuppc/bin/rsync 
chmod 750 /opt/Backuppc/bin/rsync
chmod u+s /opt/Backuppc/bin/rsync 
Then, in the host's BackupPC config, change the line that specifies the rsync path 
so it points to /opt/Backuppc/bin/rsync, and change the backup/restore command 
lines, removing everything up to the $rsyncPath variable (see the sketch below).
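
Concretely, the change in the host config looks something like this (a sketch based
on the default commands; only the ssh part is removed):

    # default:
    #   $Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';
    # for the local host, drop ssh and call the setuid rsync directly:
    $Conf{RsyncClientPath}       = '/opt/Backuppc/bin/rsync';
    $Conf{RsyncClientCmd}        = '$rsyncPath $argList+';
    $Conf{RsyncClientRestoreCmd} = '$rsyncPath $argList+';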

What do you achieve with this?
You'll have a much faster and more efficient backup, because you are not encrypting 
all the data for your localhost. Since the transfer is local you don't need the TCP 
and security layer, so in the end your backup can run about 4-10 times faster, 
depending on your CPU and memory.

You may think this is dangerous from a security perspective, but just make sure that 
no one other than the backuppc user belongs to the backuppc group.

Cheers, hope this helps a bit.
--
Pedro M. S. Oliveira
IT Consultant 
Email: [EMAIL PROTECTED]  
URL:   http://pedro.linux-geex.com
Cellular: +351 96 5867227
--

On Tuesday 09 December 2008 21:52:40 Glassfox wrote:
 
 Hello, I'm new to linux and looking for an overall backup tool for a Linux 
 server. 
 
 My first question is: can backuppc make a backup of a system it is running on 
 itself? Or do I always need to have a second backup host to make a backup of 
 my Linux server? 
 
 And my second question is: could I get a corrupt system backup if the web 
 applications and databases were not shut down during the backup process? 
 
 Thanks.
 
 
 
 
 



Re: [BackupPC-users] Keeping a specific full backup

2008-11-17 Thread Pedro M. S. Oliveira
You can also create a new host configured just like that one and, on the original 
host, set the schedule option that disables backups to 1 or 2.
That way it won't keep doing backups (see the sketch below).
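
If I recall the config options correctly, something like this in the original host's
per-host config file stops new backups while keeping the existing ones browsable (a
sketch; check the config.pl documentation for your BackupPC version, the values are
examples):

    # per the docs, -1 disables scheduled backups (manual ones still work),
    # -2 disables backups for this host entirely
    $Conf{FullPeriod} = -2;
    # or simply raise the keep count so the full you care about is never expired
    $Conf{FullKeepCnt} = 20;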

cheers,
Pedro
On Thursday 06 November 2008 07:38:26 pete davidson wrote:
 Hi all
 
 I recently restored a crashed mac harddrive from a backuppc full backup
 (phew..).  I'd like to keep that particular full backup (so if I later
 realize I actually needed some file I thought I didn't need in the original
 restore it's still there).  Is there a way to change my current config.pl
 settings or individual [hostname].pl settings to not overwrite that
 particular backup (I'm using the default settings at the moment for how
 often and how many full  incremental backups to keep)?
 
 Many thanks in advance
 
 Pete
 

-- 
--
Pedro Oliveira
IT Consultant 
Email: [EMAIL PROTECTED]  
URL:   http://pedro.linux-geex.com
Telefone: +351 96 5867227
--


Re: [BackupPC-users] Any plans to upgrade File::RsyncP to version 30?

2008-11-06 Thread Pedro M. S. Oliveira
Hi,
I had some difficulties with rsync and ssh like you; what I saw is that the 
problems were in ssh.
The server hosting BackupPC has 16 cores (4 processors). What happened was that the 
ssh process would jump from core to core and sometimes it died (the problem was 
described on the ssh dev mailing lists). I was using OpenSSH_4.2p1, OpenSSL 0.9.8a 
11 Oct 2005 on SLES 10 SP2 x86_64.

As I need support from Novell I couldn't remove the system ssh and upgrade it, so 
what I did was compile a new version and put it in another location just for 
BackupPC to use. Since I did that I have never had problems again. With BackupPC I 
use:
OpenSSH_5.1p1, OpenSSL 0.9.8a 11 Oct 2005
on SLES 10 SP2 x86_64
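
If it helps, pointing BackupPC at the alternate build is just a matter of the
SshPath setting; a sketch, where the install prefix is only an example:

    # OpenSSH built separately, e.g. ./configure --prefix=/opt/openssh-5.1 && make && make install
    $Conf{SshPath} = '/opt/openssh-5.1/bin/ssh';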

cheers 
pedro

 
On Thursday 06 November 2008 16:24:42 Jeffrey J. Kosowsky wrote:
 As I outlined in my earlier
 message(http://sourceforge.net/mailarchive/message.php?msg_name=18693.62814.802874.426715%40consult.pretender),
 it appears that some (if not a lot) of the difficulty with using
 rsync/ssh with Windows is due to the version 28 limitation of
 File::RsyncP.
 
 - Are there any (near term) plans to update?
 - How challenging would such an update be?
 - Are the changes between protocol 28 and 30 documented anywhere?
 
 Thanks
 
 
 

-- 
--
Pedro Oliveira
IT Consultant 
Email: [EMAIL PROTECTED]  
URL:   http://pedro.linux-geex.com
Telefone: +351 96 5867227
--


[BackupPC-users] Problem with CPOOL

2008-10-30 Thread Pedro M. S. Oliveira
Hi guys,
Apparently I did something really stupid. Forgetting the way BackupPC works, after a 
few hours struggling with another problem, in the middle of the night and a bit 
tired, my brain had this great idea... why not speed up the linking process by 
putting cpool on another filesystem?
I know all of you will say that can't be done because of the hard links and so on; 
while moving the data I remembered that and stopped the whole process, then moved 
the data back to the original cpool dir and filesystem. The problem is that I didn't 
move it preserving hard links, but as plain files. 
Since then I can't do backups for two of my machines.
What can I do to solve this? Should I erase all the backups for those machines and 
start over, or can I just delete the cpool and wait for it to be recreated? I don't 
mind losing those backups, but if there's no other way, no one will die.
thanks
pedro


-- 
--
Pedro Oliveira
IT Consultant 
Email: [EMAIL PROTECTED]  
URL:   http://pedro.linux-geex.com
Telefone: +351 96 5867227
--


[BackupPC-users] Little thoughts to share - RSYNC MAC SSH

2008-10-28 Thread Pedro M. S. Oliveira
Hi all!

For some time I had trouble with rsync and ssh.
My problem was that the ssh connection would stop after a few seconds
with this error in the BackupPC log:

Got fatal error during xfer (fileListReceive failed)

and in the /var/log/messages:

Disconnecting: Corrupted MAC on input.

After searching for a while and doing some digging I found that I had files that
would cause ssh to exit: you can normally terminate an ssh session with the ~.
escape sequence, and in fact I had files with that name and content.
What I did on the BackupPC config page (main configuration editor, Xfer section)
was:
RsyncClientCmd:
    $sshPath -q -x -l root $host $rsyncPath $argList+
  changed to
    $sshPath -e none -q -x -l root $host $rsyncPath $argList+
RsyncClientRestoreCmd:
    $sshPath -q -x -l root $host $rsyncPath $argList+
  changed to
    $sshPath -e none -q -x -l root $host $rsyncPath $argList+

Passing -e none disables the ssh escape character, so file contents can no longer be
interpreted as escape sequences.
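
In config.pl terms the same change looks roughly like this (a sketch matching the
commands shown above):

    $Conf{RsyncClientCmd}        = '$sshPath -e none -q -x -l root $host $rsyncPath $argList+';
    $Conf{RsyncClientRestoreCmd} = '$sshPath -e none -q -x -l root $host $rsyncPath $argList+';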

What do you guys think about making this the default in the next release?

cheers,
Pedro
