** Description changed:
SRU justification : A race condition exists when two instances of duplicity
run in the same cache directory (.cache/duplicity)
Impact : Potential corruption of the local cache
Fix : Add a lockfile in the local cache and prevent a second instance from
running while the lockfile is present
- Test Case :
+ Test Case :
1) Run one instance of duplicity :
- $ sudo mkdir /tmp/backup
- $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp /
file:///tmp/backup
+ $ sudo mkdir /tmp/backup
+ $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp /
file:///tmp/backup
While this command is running, execute the following in a separate console :
- $ sudo duplicity collection-status file:///tmp/backup
+ $ sudo duplicity collection-status file:///tmp/backup
With the new locking mechanism you will see the following :
$ sudo duplicity collection-status file:///tmp/backup
Another instance is already running with this archive directory
-
- To disable the locking mechanism, the --allow-concurrency option can be used :
- $ sudo duplicity --allow-concurrency collection-status file:///tmp/backup
- Local and Remote metadata are synchronized, no sync needed.
- Last full backup date: Mon Jan 20 17:11:39 2014
- Collection Status
- -----------------
- Connecting with backend: LocalBackend
- Archive dir: /home/ubuntu/.cache/duplicity/ba8d32ccb88d13597b4784252744fc75
-
- Found 0 secondary backup chains.
-
- Found primary backup chain with matching signature chain:
- -------------------------
- Chain start time: Mon Jan 20 17:11:39 2014
- Chain end time: Mon Jan 20 17:11:39 2014
- Number of contained backup sets: 1
- Total number of contained volumes: 3
- Type of backup set: Time: Num volumes:
- Full Mon Jan 20 17:11:39 2014 3
- -------------------------
- No orphaned or incomplete backup sets found.
+ If you are sure that this is the only instance running you may delete
+ the following lockfile and run the command again :
+ /home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile
Regression : In the case of an unexpected interruption of duplicity, the
lockfile will remain in .cache/duplicity, which can prevent future use of
duplicity. The cache
- directory will have to be cleaned or the --allow-concurrency option will be
needed
+ directory will have to be cleaned as outlined in the error message
Original description of the problem :
When running "duply X status" while "duply X backup" is running (sorry, I
don't know which duplicity commands are created by duply), a race
condition in 'sync_archive' can cause newly created meta-data files to be
appended to 'local_spurious' and subsequently deleted. This deletion
seems to have been what triggered bug 1216921.
The race condition is that the backup command constantly creates meta-data
files, while the status command queries the lists of local and remote files
independently over a larger time span. This means that a local file might
already exist remotely even though the status command did not see it there
a few seconds earlier.
--
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763
Title:
Race condition between status and backup
Status in Duplicity - Bandwidth Efficient Encrypted Backup:
In Progress
Status in “duplicity” package in Ubuntu:
In Progress
Status in “duplicity” source package in Precise:
In Progress
Status in “duplicity” source package in Quantal:
In Progress
Status in “duplicity” source package in Raring:
In Progress
Status in “duplicity” source package in Saucy:
In Progress
Status in “duplicity” source package in Trusty:
In Progress
Bug description:
SRU justification : A race condition exists when two instances of duplicity
run in the same cache directory (.cache/duplicity)
Impact : Potential corruption of the local cache
Fix : Add a lockfile in the local cache and prevent a second instance from
running while the lockfile is present
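A minimal sketch of the idea, assuming a plain O_EXCL lockfile (the
temporary stand-in archive directory and the placeholder work section are
illustrative assumptions, not duplicity's actual implementation):

    import os
    import sys
    import tempfile

    # Stand-in for the per-backup cache dir under ~/.cache/duplicity/<hash>
    archive_dir = tempfile.mkdtemp(prefix="duplicity-cache-")
    lock_path = os.path.join(archive_dir, "lockfile")

    try:
        # O_EXCL makes the create atomic: it fails if the lockfile exists
        lock_fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except OSError:
        sys.stderr.write("Another instance is already running with this "
                         "archive directory\n")
        sys.exit(2)

    try:
        pass  # the backup / collection-status work would run here
    finally:
        os.close(lock_fd)
        os.remove(lock_path)  # removed on clean exit; a crash leaves it behind

Creating the file with O_CREAT | O_EXCL makes the check and the creation a
single atomic step, so two instances cannot both conclude that they hold
the lock.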
Test Case :
1) Run one instance of duplicity :
$ sudo mkdir /tmp/backup
$ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp /
file:///tmp/backup
While this command is running, execute the following in a separate console :
$ sudo duplicity collection-status file:///tmp/backup
With the new locking mechanism you will see the following :
$ sudo duplicity collection-status file:///tmp/backup
Another instance is already running with this archive directory
If you are sure that this is the only instance running you may delete
the following lockfile and run the command again :
/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile
Regression : In the case of an unexpected interruption of duplicity, the
lockfile will remain in .cache/duplicity, which can prevent future use of
duplicity. The cache directory will have to be cleaned as outlined in the
error message.
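For example, a stale lockfile left behind by an interrupted run could be
removed like this (the hashed directory name is the one quoted in the error
message above and will differ per backup target; only do this when no other
duplicity instance is running):

    import os

    stale_lock = ("/home/ubuntu/.cache/duplicity/"
                  "3fe07cc0f71075f95f411fb55ec60120/lockfile")
    if os.path.exists(stale_lock):
        os.remove(stale_lock)  # safe only if no other instance is running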
Original description of the problem :
When running "duply X status" while "duply X backup" is running (sorry, I
don't know which duplicity commands are created by duply), a race
condition in 'sync_archive' can cause newly created meta-data files to be
appended to 'local_spurious' and subsequently deleted. This deletion
seems to have been what triggered bug 1216921.
The race condition is that the backup command constantly creates meta-data
files, while the status command queries the lists of local and remote files
independently over a larger time span. This means that a local file might
already exist remotely even though the status command did not see it there
a few seconds earlier.
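As a self-contained illustration of that window (hypothetical file names,
not duplicity's actual sync_archive code), a metadata file written between
the remote listing and the local listing ends up classified as spurious:

    # Remote metadata listed first, at time T0.
    remote_snapshot = {"duplicity-full.20140120T171139Z.manifest"}

    # A concurrent backup then writes and uploads a new signature file,
    # so the local listing taken at time T1 contains one extra file.
    local_files = {"duplicity-full.20140120T171139Z.manifest",
                   "duplicity-new-signatures.20140120T171139Z.sigtar.gz"}

    # The stale remote snapshot makes the brand-new file look spurious,
    # and sync_archive-style logic would delete it from the local cache.
    local_spurious = local_files - remote_snapshot
    print(local_spurious)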
To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions
--
Mailing list: https://launchpad.net/~desktop-packages
Post to : [email protected]
Unsubscribe : https://launchpad.net/~desktop-packages
More help : https://help.launchpad.net/ListHelp