Rafael,

It was changed in this PR:
https://github.com/apache/cloudstack/pull/1749/files#diff-6eeb1a2fb818cccb14785ee80c93a561R560



On Tue, Jan 9, 2018 at 4:44 PM, Khosrow Moossavi <kmooss...@cloudops.com> wrote:

> That is correct, Mike. The quoted part above was misleading; it should have
> been "at any given point in time *when the transaction has finished*".
> Removal of the "other" or "current failed" snapshot happens at the very end
> of the method. The state of the SR over time would be something like:
>
> 1) snapshot-01 (at rest)
> 2) snapshot-01, snapshot-02 (while taking snapshot-02 on primary storage
> and sending to secondary storage)
> 3) snapshot-02 (at rest again, after success)
> OR
> 3) snapshot-01 (at rest again, after failure)
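>
> A toy sketch of that lifecycle, with illustrative names only (these are
> not the actual CloudStack types or methods):
>
>     import java.util.ArrayList;
>     import java.util.List;
>
>     // Toy model of the SR contents during a snapshot backup.
>     class SrLifecycle {
>         static List<String> sr = new ArrayList<>(List.of("snapshot-01")); // 1) at rest
>
>         static void backup(String newSnap, boolean copySucceeded) {
>             sr.add(newSnap); // 2) old and new coexist during the copy
>             if (copySucceeded) {
>                 sr.removeIf(s -> !s.equals(newSnap)); // 3) success: keep the new one
>             } else {
>                 sr.remove(newSnap);                   // 3) failure: keep the old one
>             }
>         }
>
>         public static void main(String[] args) {
>             backup("snapshot-02", true);
>             System.out.println(sr); // prints [snapshot-02]
>         }
>     }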
>
>
> Khosrow Moossavi
>
> Cloud Infrastructure Developer
>
> t 514.447.3456
>
>
>
> On Tue, Jan 9, 2018 at 4:33 PM, Tutkowski, Mike <mike.tutkow...@netapp.com> wrote:
>
>> “technically we should only have "one" on primary storage at any given
>> point in time”
>>
>> I just wanted to follow up on this one.
>>
>> When we are copying a delta from the previous snapshot, we should
>> actually have two snapshots on primary storage for a time.
>>
>> If the delta copy is successful, then we delete the older snapshot. If
>> the delta copy fails, then we delete the newest snapshot.
>>
>> Is that correct?
>>
>> > On Jan 9, 2018, at 11:36 AM, Khosrow Moossavi <kmooss...@cloudops.com> wrote:
>> >
>> > "We are already deleting snapshots in the primary storage, but we always
>> > leave behind the last one"
>> >
>> > This issue doesn't happen only when something fails. We are not deleting
>> > the snapshots from primary storage (not on XenServer 6.25+, and not since
>> > Feb 2017).
>> >
>> > The fix in this PR is:
>> >
>> > 1) when transferred successfully to secondary storage, everything except
>> > "this" snapshot gets removed (technically we should only have "one" on
>> > primary storage at any given point in time) [towards the end of the try
>> > block]
>> > 2) when transferring to secondary storage fails, only "this" in-progress
>> > snapshot gets deleted [in the finally block]; see the sketch below.
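>> >
>> > A rough sketch of that control flow, with hypothetical method names (the
>> > real CloudStack API differs):
>> >
>> >     SnapshotInfo snap = null;
>> >     boolean copied = false;
>> >     try {
>> >         snap = takeSnapshotOnPrimary(volume);  // old + new coexist on SR
>> >         copyToSecondary(snap, secondaryStore); // e.g. send the VHD to Swift
>> >         copied = true;
>> >         deleteOtherSnapshotsOnPrimary(snap);   // keep only "this" one
>> >     } finally {
>> >         if (!copied && snap != null) {
>> >             deleteSnapshotOnPrimary(snap);     // drop only the failed one
>> >         }
>> >     }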
>> >
>> >
>> >
>> > On Tue, Jan 9, 2018 at 1:01 PM, Rafael Weingärtner <rafaelweingart...@gmail.com> wrote:
>> >
>> >> Khosrow, I have seen this issue as well. It happens when there are
>> >> problems transferring the snapshot from the primary to the secondary
>> >> storage. However, we need to clarify one thing. We are already deleting
>> >> snapshots in the primary storage, but we always leave behind the last
>> >> one. The problem is that, if an error happens during the transfer of the
>> >> VHD from the primary to the secondary storage, the failed snapshot VDI
>> >> is left behind in primary storage (for XenServer). These failed
>> >> snapshots can accumulate over time and cause the problem you described,
>> >> because XenServer will not be able to coalesce the VHD files of the VM.
>> >> Therefore, what you are addressing in this PR are the cases when an
>> >> exception happens during the transfer from primary to secondary storage.
>> >>
>> >> On Tue, Jan 9, 2018 at 3:25 PM, Khosrow Moossavi <kmooss...@cloudops.com> wrote:
>> >>
>> >>> Hi community
>> >>>
>> >>> We've found [1] and fixed [2] an issue in 4.10 regarding snapshots
>> >>> remaining on primary storage (XenServer + Swift), which causes the VDI
>> >>> chain to fill up after some time so that the user cannot take another
>> >>> snapshot.
>> >>>
>> >>> Please include this in the 4.11 milestone if you see fit.
>> >>>
>> >>> [1]: https://issues.apache.org/jira/browse/CLOUDSTACK-10222
>> >>> [2]: https://github.com/apache/cloudstack/pull/2398
>> >>>
>> >>> Thanks
>> >>> Khosrow
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >> Rafael Weingärtner
>> >>
>>
>
>
