Gerald,
Just bumping this up a bit. Any ideas?
Have you tried rebuilding the backups file with bin/BackupPC_fixupBackupSummary?
Craig
Andreas writes:
At our site, files larger than 10GB are usually recreated faster than
restored from backup, so we added the parameter --max-size=100 to
RsyncExtraArgs.
Unfortunately rsync implements that on the receiving side (ie: server),
and File::RsyncP doesn't implement
Frank,
Anyway, I thought I had it all figured out, but when I decrypt, gunzip, and
untar the resulting file, I get some "tar: Skipping to next header" messages
in the output, and, although I do get some files out of the archive,
eventually tar just hangs.
Does the original tar archive
Michael,
BackupPC stores file names and log contents using utf8. Either
the Xfer method or BackupPC needs to convert to/from the client's
charset.
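(As an aside, the client charset is a per-host config setting; eg, assuming
a western Windows client:
$Conf{ClientCharset} = 'cp1252';)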
The Xfer method might or might not convert the client's charset to
utf8. By default rsync/rsyncd will not. By default smb will, but
it can be
itismike,
If you have something to contribute to it, please do so!
Thanks for the perspective and the encouragement. How does one go
about requesting editor permissions on the wiki?
You need a SourceForge account. Just email your SF user name to me.
I'd love to completely open up the
itismike,
Thanks Craig. Not sure what your email account is as you appear as a 'guest'
in this forum. My sourceforge name is the same as my username here.
I'm not sure about your email either. You now have edit admin
privileges.
Craig
Tod writes:
What was the reason the old script was removed?
It's relatively complex and I haven't invested the effort to
create a test harness for it. Plus I don't plan to support
it for 4.x.
Could I download an old version of backuppc and use the old script
on the newer pool?
Yes, that
Robin writes:
I have four hosts with identical configuration, as far as I know.
All of them have:
$Conf{MaxBackupPCNightlyJobs} = 8;
On one, and only one as far as I can tell, running:
sudo -u backuppc BackupPC_serverMesg BackupPC_nightly run
results in:
$ ps -aef |
Tod writes:
The changelog states that BackupPC_compressPool was removed in
3.2.0beta0. Is there a replacement for this? I have an uncompressed
pool I would like to convert to compressed.
Unfortunately not. You can restart BackupPC with compression
turned on. Eventually (when the
Robin,
Is the format written up anywhere?
Yes, it's in the documentation:
http://backuppc.sourceforge.net/faq/BackupPC.html#storage_layout
Scroll down to backups.
Craig
Robin writes:
We add a lot of stuff automatically to our backuppc configs, and
manually going into the UI and doing the config reload is easy to
forget. Can it be done on the command line without breaking any
backups (i.e. without restarting)?
Run this command:
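Assuming a standard install, that command is:
sudo -u backuppc BackupPC_serverMesg server reload
It asks the running daemon to re-read the config, so in-progress backups
are not interrupted.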
Robin writes:
rsync --fuzzy seems to break BackupPC; fileListReceived breaks.
Could that be fixed easily?
Sorry, but no. In 4.x I hope to implement a solution that will
be better even than --fuzzy, although it might not be included
in the initial release. Since 4.x uses (almost) native rsync
Garith,
Yes, you are seeing a problem in a different part of the code than
the one fixed in 3.2.0.
We need to find out why one of the metadata values is negative, which
causes pack() to fail. BackupPC expects the type, mode, uid, gid and
size to all be non-negative numbers. I can't immediately see from the
Garith,
As the subject suggests, I'm getting an error on a host I'm attempting to
back up. The exact error as it stands in the error log shows (server name
removed):
2010-09-01 12:08:28 serverhostname: Cannot compress negative numbers in
pack at
Stephen,
I am running BackupPC 3.1.0 on Nexenta. It seems to be working for the
most part. I am having a problem with BackupPC_tarCreate. I am trying
to create a tar of a 30GB backup. The tar I create ends up being 30GB,
but when extracted it only takes up 5GB and is missing a lot of
Jeffrey writes:
True - I haven't seen any mention in the documentation of any 'flag'
that would send checksums.
There is an rsync option --checksum that will compute and send a
full-file MD5 digest from the client for every file as part of the
initial file list. It is there as an alternative
Jonathan,
Is there a permissions issue that somebody needs to fix or did the wiki
become closed?
The wiki changes on SF require explicit permission.
Please send me your SF login name and I will give you
edit permission.
I encourage people to fix and update the wiki - after the
botched port
Trey writes:
I should probably look back at the archives for this, but no
hardlinks? I thought that was the way the pool was able to save disk
space -- an essential feature (at least for us). Is this going to be
implemented in some other way?
Yes, pooling continues to work. Reference
Saturn2888 writes:
To make it easier for people to know what I'm talking about, here are the
errors I'm getting in my logs. It's a bunch of things like these for
different folders on different hosts.
2010-07-28 02:40:03 ERROR: opendir
Martin writes:
And what about rsync3 support in File::RsyncP?
There's progress. Most of my BackupPC work recently has been on 4.x,
the next major version. Several months ago there were various emails
about the architecture. Yes, it will support rsync 3.x and extended
attributes. I've
BackupPC 3.2.0 has been released on SF.net.
This release contains a few new features and a significant number of
bug fixes. Some of the new features are:
* Added FTP xfer method, implemented by Paul Mantz.
* Moved setting of $bpc->{PoolDir} and $bpc->{CPoolDir} after the config
file is read in
Marcelino writes:
The message is from /usr/share/backuppc/lib/BackupPC/Lang/en.pm and
says something like this:
Your PC (user) has not been successfully backed up for 14760.2 days.
I noticed the message comes from $days but I can't figure out how it's
calculating it wrong...
14760 days
Marcelino writes:
The backups file says this:
0 partial 1275390002 0 0 0 0 0 0 0 0 0 0 3 0 0 0 1 smb 0 3.1.0
Somehow an incomplete backups file got written. The 4th field is the
end time, which is 0 in this case.
Since no files were backed up I'd recommend removing the 0 directory
and the
Bruce,
BackupPC_serverMesg status info
The command just hangs. The strace information I provided shows that
it's hanging when trying to read input from the backuppc daemon.
By default the BackupPC programs use a unix-domain socket to
communicate with the BackupPC server. Perhaps the
Jaco writes:
This is what I get on line 64 in config.pl:
$Conf{BackupFilesExclude} = '['/']';
As other people pointed out the quoting is wrong. You should
have something like this:
$Conf{BackupFilesExclude} = ['/'];
But note that will exclude all the files.
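If the intent is to exclude only certain paths, the per-share form looks
like this (the share name and paths here are just placeholders):
$Conf{BackupFilesExclude} = {
    '/home' => ['/tmp', '/cache'],
};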
Craig
Andy,
Unterminated compressed integer in unpack at
/opt/backuppc/lib/BackupPC/Attrib.pm line 257.
It sounds like an attrib file was not compressed properly or is
corrupted somehow.
In 3.2.0beta there is a fix that makes this error non-fatal,
so at least the backups should continue. You can
shalauras writes:
I'm making backups of some servers daily, but I have a problem with one
server.
The client takes up around 50GB, almost all of it in one directory (49GB).
When I try to make a backup, I get an error when it tries to copy the
49GB directory:
Out of memory during large
Gerald writes:
Has anybody looked into using BackupPC with lessfs?
http://www.lessfs.com/
Overall lessfs looks promising.
Yes, I have looked into it, although I tested it with native rsync not
BackupPC. I did some benchmarking of lessfs with rsync to get an idea
of the
Les writes:
I think this will need some sort of atomic-operation reference count
manager at the heart of things so you don't do those lookups except as a
repair step.
Yes.
Will there be a way to disable the de-dup operations completely in the
case where the underlying filesystem does it
Jeffrey writes:
Sounds like a lot of work though, and kudos to you for committing to
such an extensive upgrade -- are you going to have the time and
resources to get this done in 2010 (or 2011 or 2012 ;)?
Not sure - hopefully this year.
One more question if you don't mind...
Can you give an
Jeffrey,
Thanks for the suggestions. I've since decided to eliminate
hardlinks altogether in 4.x. This is an aggressive design step,
but if successful it will resolve many of the annoying issues
with the current architecture. (To be clear, hardlinks are
still needed in certain cases, since
Jeffrey,
Hopefully, it will also broaden the usability of BackupPC to
filesystems and OS's that don't support hard links since it seems like
you will be eliminating just about all the filesystem-specific
requirements.
The file system will still need hardlink support, but as I said
only for
Jeffrey,
Yes, read-only FUSE is a good idea.
I previously prototyped a read/write FUSE on top of BackupPC pooling
to support native rsync and tar, but the performance was quite poor
so I abandoned that approach for 4.x.
The BackupPC::View API hasn't changed a whole lot (and it supports both
4.x
Johan,
Thanks for your help debugging this!
It turns out it is a bug in Archive::Zip 1.30 (the latest version).
Earlier versions (eg: 1.23) are ok.
I submitted a bug report and patch for Archive::Zip on cpan.org:
https://rt.cpan.org/Ticket/Display.html?id=54827
If there is a need I could
Huw Wyn writes:
I'm losing it here, Les :( OK, if I understood what you meant correctly,
something like:
/usr/local/BackupPC/bin/BackupPC_zipCreate -c 4 -h fshac1 -n 530 -s
/home/hefinw /backup/hefin.zip
should work. Given the location of the directory
Joe,
Yes, you have the IO::Dirent problem, and it sounds like it is
fixed, since BackupPC_nightly reports non-zero information.
However, it doesn't report that it removed any files, so every
cpool file (at least for the output you included) has at least
2 links.
However, BackupPC_trashClean
John writes:
The reason for the current layout is to allow the existing critical
backups to complete even if it takes days to backup the newer shares
(filesystems). So it's a temporary thing, but I could see somebody
wanting to put /home first to prioritize its backup over, say, /.
Is the bug
John writes:
2010-02-12 18:34:33 unexpected repeated share name / skipped
It's a bug (also reported by Tim Connors) that I meant to fix
in 3.2.0beta1 but somehow overlooked.
As a workaround, try putting / first in the list.
Craig
Matthias writes:
I believe that RsyncFileIO.pm decompresses a file into share/RStmp and then
compares it (rsync) with the file from the client.
Yes, but only if the file is bigger than 16MB (otherwise
it just does it in memory) and delays doing it until it
knows there is a difference in the file.
Paul writes:
I know this has been asked a lot from time to time. But does anyone
know when someone will alter BackupPC to use rsync 3 natively rather
than the Perl plugin, which is on protocol 2.8?
Are there any realistic plans to do that?
Yes, it's being worked on, but still quite some
Pascal writes:
I am using backuppc for about 400 PC which are on a samba domain with
an LDAP backend.
For each PC we have defined a user by his uid, but when a mail is
sent, it's sent to u...@def.domain. In our LDAP, the mail attribute is
used for that.
I patched BackupPC_sendEmail
Johan writes:
The user that has reported me the error on the bugzilla did that also,
the error is:
Can't locate object method "tell" via package "IO::Handle" at
/usr/lib/perl5/vendor_perl/5.10.0/Archive/Zip/Member.pm line 746.
Interesting. It appears Archive::Zip is trying to seek() the output
Johan,
I looked at the small broken zip file but couldn't determine
why it wasn't written correctly.
Just to confirm - BackupPC_zipCreate does work correctly when
you run it manually with compression turned on?
Next, I would suggest looking in lib/BackupPC/CGI/Restore.pm:
#
#
Don writes:
Every night since, it removes the last evening's backup and does a
new full, still labeled 0.
fsck comes back clean, even when run forced.
Is there some way that $Conf{FullKeepCnt} got set to 0?
Please check both the main and per-host config files.
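A quick way to check (the paths assume a Debian-style layout):
grep -n FullKeepCnt /etc/backuppc/config.pl /etc/backuppc/*.pl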
Craig
Sorin writes:
full backup started for directory /etc
Running: /usr/bin/ssh -q -x -l root xxx.yyy.zzz.nnn
/usr/sbin/rsync --server --sender --numeric-ids --perms --owner --group -D
--links
--hard-links --times --block-size=2048 --recursive --ignore-times . /etc/
Xfer PIDs are now 16088
Don writes:
The numbered directories are gone. I only have 0 from last night's backup,
which is 0.3 days old, and it replaced the one from the day before.
Please look in the host's LOG file. Do you see messages like:
2005-02-06 11:20:55 removing old full backup 0
If not, then it
Leandro writes:
I recently installed backuppc onto a new server and mounted /var/lib/backuppc
from a remote NAS where all the backup data is stored. I can see all the old
backups for all my hosts except one. This one host ran a scheduled backup
last night which started as backup # 0. I
Johan,
Have you looked at the contents of the small zip file that
is downloaded with zip compression turned on?
Perhaps it contains an error message or html text of some kind.
Also, is there anything in the LOG file?
Craig
Jeff writes:
The md5sum for the root level attrib (i.e., the attrib file at the
level of pc/machine/attrib) is wrong if:
1. There are at least 2 shares
2. The attrib entries for each of the shares has changed since the
last backup (e.g., if the share directory has its mtime
Jeff,
Yes, time flies. Thanks for the reminder :).
I've made the attached changes to 3.2.0beta0 (not yet in CVS), and the
one thing I'd still like to fix is ftp. I've given up on Zmanda doing
that, so I'll take that up myself now.
Given there are quite a few changes (although minor) I would
Jeffrey writes:
In the past, people have mentioned that sometimes BackupPC_trashClean
seems to get hung and hog almost 100% of CPU.
This is quite insidious since what apparently happens is that when
trashClean has trouble removing a file(s), it just continues to loop
through trying.
Christian writes:
But it backs up all .vmdk files from the excluded directory above
This is due to the behavior of smbclient. Here's an explanation
of how to make it work.
Craig
-- Forwarded message --
To: o...@jltechinc.com
From: Craig Barratt cbarr
Michael writes:
I get TextFileWrite: Failed to verify /etc/BackupPC/config.pl.new
in the browser, under the main Hosts/Xfer etc area.
I check the filesystem and see the file re-created:
# ll conf*
-rw-r- 1 backuppc apache 78506 Nov 10 12:24 config.pl
-rw-r- 1 backuppc apache 0
Jeffrey,
I'm not sure what is going on here. But the example you show appears to
show different files from the ones in /c/cygwin/usr/share/man/man1.
The files you copied with rsync are:
ls -inl
total 184
1130522 -rw-r--r-- 2 501 501 78716 Jul 24 08:57 gawk.1
1130521
Jeffrey writes:
Specifically, the following works:
$Conf{DumpPreShareCmd} = 'mycommand $share $share/myfile';
And indeed $share is substituted properly.
But the following does not:
$Conf{DumpPreShareCmd} = 'mycommand $share ${share}/myfile';
And instead it passes ${share} literally for the
Pat,
Running: /usr/bin/smbclient law98\\C\$ -U price -W central -E -d 3
-c tarmode\ full -Tc -
You have debugging turned on (-d 3). Every debug message is
considered an error - unless BackupPC recognizes the output, it
assumes it is an error.
So you should turn debugging off.
Craig
Pat writes:
am just getting a backup server going, but the problem is that I
am getting the following error (Backup aborted (Too many smbtar errors
(50001))). I'm not seeing this in the documentation, but I am seeing some
reports of it in some mailing groups. Anyone got any ideas about
Mirco writes:
Try running a Windows error check on that disk.
But is there a reason I don't understand?
Must I suppose that the disk is damaged?
It is used every day (it acts as a file server) without problems...
I've seen cases where it was disk corruption that caused that error
from
Mirco writes:
2009-10-18 14:33:44 Got fatal error during xfer (SUCCESS - 0 opening remote
file \remote\path\0 (\remote\path\))
Try running a Windows error check on that disk.
Craig
Peter writes:
Tyler J. Wagner wrote:
Isn't this just a matter of renaming/moving directories in
/var/lib/backuppc/pc?
mv /var/lib/backuppc/pc/hostname2/0 /var/lib/backuppc/pc/hostname1/31
etc. Plus the logs.
Tyler
I will try that. Thanks for the tip.
You have to do more than
Jeffrey writes:
So, I guess I am asking whether the maintainer(s) of the Wiki are
aware of the problem and if so is there a solution in progress?
If not, what do we need to do to restore the wiki?
Given the number of FAQ posts that take up a good portion of the
bandwidth of this group,
Alexander writes:
I just added a new client to the hosts file, but I keep getting this
error No backup directory /var/lib/backuppc//pc/server-wina/new... Why
is this happening?
This means the backup failed very early in the process.
Sending args: --server --sender -v --ignore-times . .
Steven writes:
could this have to do with the fact that OS X switched the default tar
from gnutar to bsdtar with Snow Leopard?
http://discussions.apple.com/thread.jspa?threadID=2144311&tstart=0
I think gnutar is still there, you might just have to change the
arguments for calling.
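If gnutar is still present, pointing BackupPC at it explicitly may be
enough (the path assumes the usual OS X location):
$Conf{TarClientPath} = '/usr/bin/gnutar';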
Thanks,
Barb writes:
I too am having problems with backups on Snow Leopard. I am also
using tar over ssh, and my error logs look pretty much the same as
Jonathan's. At the end it nicely says "No files dumped for share /"
and shows the backup as failed. It does, however, back up the new
files I've
Lorrin writes:
tarExtract: Done: 1 errors, 30960 filesExist, 15823490040 sizeExist,
15823490040 sizeExistComp, 34851 filesTotal, 17143873836 sizeTotal
Got fatal error during xfer (No files dumped for share /)
TarShareName is set to /
Do you know what the 1 error is? Check the per-PC log
Bruno,
The communication with the BackupPC server happens via a unix-domain
socket. That's a special file that allows the server (BackupPC) to
listen to connections from clients (eg: CGI via apache/BackupPC_admin
or BackupPC_serverMesg or BackupPC_dump).
In non-FHS installations the socket is
Jeffrey writes:
Unless I missed a post, I don't see how there is enough data to answer
your question whether it is broken or not. It's possible and perhaps
even probable that you mistakenly created shares instead of excludes
(which is what the end result looks like) or there could be
Davide,
The basic problem with putting perl expressions in the config file
(to make settings cumulative) is that it isn't compatible with the
CGI config editor. The config editor will get the intended computed
values, but when it re-writes the config file it will just write the
values, not the
James writes:
I'm reposting this as I evidently tried to steal a thread before.
Since upgrading a very busy BackupPC server to 3.1, it's been falling
farther and farther behind due to disk contention between the nightly
admin jobs and backups which ran 24x7 on the 2.x setup. I asked for
Steve writes:
Is there a parameter that sets the priority of one backup over another,
or do all the BackupPC_dump processes start at the same level? Maybe
that would be a $Conf that could be added...
In 3.2.0beta0 there is one for admin jobs, but not backups.
See $Conf{CmdQueueNice}.
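For example, to run queued admin commands at nice level 10:
$Conf{CmdQueueNice} = 10;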
Craig
Jeffrey writes:
In general, using FullKeepCnt and IncrKeepCnt (and associated
variables) works well to prune older backups.
But sometimes there is a *specific* older backup that you want to hang
onto because it has some crucial data (or is a 'better' snapshot). It
would be great if you
Christoph,
So it seems that the problem is the same with the tcp connection as with the
unix socket.
Do you have any further idea what the problem could be?
Does anybody know why the read() system call might hang?
Most likely the BackupPC server doesn't see the first message
from
Christoph,
It's possible unix-domain sockets don't work correctly on
your system. BackupPC_serverMesg is blocked waiting for
a reply from the BackupPC server.
It's more likely that TCP sockets work.
Try setting $Conf{ServerHost}, $Conf{ServerPort} and
$Conf{ServerMesgSecret}.
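For example (the port and secret here are placeholders you should pick
yourself):
$Conf{ServerHost} = 'localhost';
$Conf{ServerPort} = 2359;
$Conf{ServerMesgSecret} = 'some-long-random-string';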
Jim writes:
I get decent transfers with actual rsync, but File::RsyncP has some
serious design issues (see my other post with profiling information
titled File::RsyncP issues). Is the author of that module (Craig
Barratt) still around and/or maintaining it?
Yes - I also responded to your
Holger writes:
I believe Craig is researching other alternatives (a fuse FS to handle
compression and deduplication, so BackupPC could, in fact, use native rsync).
If that proves unviable, upgrading File::RsyncP to protocol version 30 would
probably be next. But File::RsyncP is open source,
Craig
-- Forwarded message --
To: rs...@lists.samba.org
Subject: Rsync performance increase through buffering
From: Craig Barratt cr...@atheros.com
Message-ID: auto-02369...@atheros.com
Date: Sun, 08 Dec 2002 23:48:57 -0800
I've been studying the read and write buffering
Matthias writes:
Every day I get a message in __LOGDIR__/LOG:
2009-08-12 02:35:49 Cpool is 322.19GB, 1142028 files (860 repeated, 31 max
chain, 11424 max links), 4369 directories
What is the meaning of:
repeated
The total number of pool files with hash collisions. Since the hash is
not
SourceForge has completed the transition to MediaWiki, but it
looks like some of the formatting is messed up. Hopefully all
the content is there, but I haven't checked.
I'd appreciate it if people could look at pages they have edited or
created to get them back to their former state. SF was meant
SourceForge has discontinued Wikispaces, and they are migrating
the wiki pages to MediaWiki or Trac.
I have a support request in with SF for them to port the pages
to MediaWiki. Once that's done the Wiki link should work again.
I have a snapshot of the final Wikispaces pages, so if it doesn't
Kanwar writes:
full backup started for directory /home; updating partial #147
started full dump, share=/home
Running: /usr/bin/sudo /usr/bin/rsync --server --sender --numeric-ids
--perms --owner --group -D --links --hard-links --times
--block-size=2048 --recursive --checksum-seed=32761
Christian writes:
Hi,
I'm experiencing some strange difficulties with BackupPC
(3.1.0-3ubuntu1 on Ubuntu 8.04 LTS). It appears that BackupPC is not
recognizing files that it has already put into the pool. The log shows
a nightly message according to which the pool is 0GB, consisting of 0
ummmax writes:
Could anyone give me a hand with a description of the
/backuppc/pc/$host/backups file? The first two columns make sense, and I've
figured out 3 (start time), 4 (end time), and 6 (size), but the rest? :)
Look in the documentation.
Craig
gimili writes:
I switched back from tar to rsync. It sounds like rsync is far
superior. I ran a full backup which took 302 minutes and then an
incremental which only took 26 minutes. So it seems like things are
working now as the incremental was much quicker. I am not sure what
happened
Matthias writes:
I use backuppc 3.1.0-4lenny1 (Debian).
I have a bash script, run as DumpPreShareCmd, which should return 0 or 1.
Unfortunately, BackupPC doesn't receive the return code from my script.
As a test I made a script which contains only two lines:
#! /bin/bash
exit 0
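One thing to check: BackupPC only looks at the exit status of the
Dump*Cmd scripts when this setting is enabled:
$Conf{UserCmdCheckStatus} = 1;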
Axel,
I can also reproduce this problem on the command line, so it looks to me
as if this could be a pure smbclient bug -- if there isn't something
bogus in the command-line options or parameters to smbclient:
backu...@backuppc:~ $ /usr/bin/smbclient fnord.example.net\\C\$ -U
error403 writes:
incr backup started back to 2009-06-26 22:53:27 (backup #0); for directory
/var/lib/backuppc/sshfsuser/
Xfer PIDs are now 30107,30106
/bin/tar: Substituting 1901-12-13 15:45 for unknown date format `2009-06-26\\
22:53:27'
You didn't include your $Conf{TarClientCmd}
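For comparison, the stock 3.x setting looks like this:
$Conf{TarClientCmd} = '$sshPath -q -x -n -l root $host'
                    . ' env LC_ALL=C $tarPath -c -v -f - -C $shareName+'
                    . ' --totals';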
Mirco writes:
Is there a way to also retrieve other information at run time, to
pass as args to my scripts?
I mean (for now :-D):
- user password (for smb XferMethod, defined in main config.pl or in
host.pl),
- values listed in $Conf{BackupFilesOnly}
To get that information you
Matthias writes:
I see that BackupPC knows how often a host was reachable or not, but I
can't find the file where this information is stored.
The host status is stored in the BackupPC server. It is periodically
saved to the $TOPDIR/log/status.pl file so that it can be restored
if you restart
Holger writes:
Craig was kind enough to unsubscribe him for that before; apparently
he has now resubscribed. Can you do it again, please, Craig?
Done.
Craig
Jason writes:
New to Backuppc. Is there a log for files that have been recovered or
restored?
If an individual file is downloaded (ie: if a user clicks on a
file name) or a zip or tar file is downloaded that is logged to
BackupPC's main log file. These aren't technically restores,
because
Hereward writes:
I'm looking for a way of extracting details of backuppc jobs so that I
can parse them and produce some graphs of usage and general activity.
For current running jobs, host status, server info (pool stats etc)
and command queues you can do this:
bin/BackupPC_serverMesg status
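followed by one or more of info, hosts, or queues, eg:
sudo -u backuppc bin/BackupPC_serverMesg status info hosts queues
The reply is a string of Perl assignments, so it can be eval'd or parsed
from a script.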
G.W. Haywood writes:
Is there any chance that the daily digest could be, well, daily?
I just changed digest_size_threshhold from 30k to 500k.
Craig
Interesting thread!
Jeffrey writes:
That being said, I agree that using a database to store both the
hardlinks along with the metadata stored in the attrib files would be
a more elegant, extensible, and platform-independent solution though
presumably it would require a major re-write of
Craig writes:
Do you have release notes posted anywhere describing the new features
included in this release 3.2.0beta0?
http://sourceforge.net/project/shownotes.php?release_id=673692&group_id=34854
Craig
Ralf writes:
thanks, this seems to solve the problem:
Sounds like you have the IO::Dirent + xfs problem. It's fixed
in 3.2.0 beta0.
Craig
BackupPC users,
If you are interested in nominating your favorite open source
projects on SF.net please see below.
If that happens to be BackupPC, you can use this link:
http://sourceforge.net/community/cca09/nominate/?project_name=BackupPC&project_url=http://backuppc.sourceforge.net/
Boniforti writes:
Well, I'll have to wait until *all backups* are done. Do you think the
graphs will also be updated? Right now I see them not corresponding to
the pool size.
What is your new filesystem type? Do you have IO::Dirent installed?
IO::Dirent doesn't work correctly
John,
Would making the three buttons read something like:
<input type="submit" name="button" value="Start Incr Backup">
<input type="submit" name="button" value="Start Full Backup">
<input type="submit" name="button" value="Stop/Dequeue Backup">
with support in the cgi for detecting
iazmin writes:
I installed everything provided in
http://www.inreto.de/dns323/fun-plug/0.5/extra-packages/perl/. So the Perl
version is 5.10-2, while for BackupPC I am using version 3.1.0.
At line 1635 of bin/BackupPC, try replacing this:
foreach my $host ( sort(HostSortCompare
iazmin writes:
I am using ffp 0.5 on my D-Link DNS-323 NAS.
I am using this wiki http://wiki.dns323.info/howto:backuppc to install
backuppc.
I am still unable to get BackupPC to run on the device. The command
that I use to start backupPC is:
home/root#