On Mon, Sep 26, 2016 at 11:13 AM, Ilya Dryomov wrote:
> [snipped]

On Mon, Sep 26, 2016 at 8:39 AM, Nikolay Borisov wrote:
> [snipped]
On 09/22/2016 06:36 PM, Ilya Dryomov wrote:
> [snipped]

On Thu, Sep 15, 2016 at 3:18 PM, Ilya Dryomov wrote:
> [snipped]
On Thu, Sep 15, 2016 at 2:43 PM, Nikolay Borisov wrote:
>
> [snipped]
>
> cat /sys/bus/rbd/devices/47/client_id
> client157729
> cat /sys/bus/rbd/devices/1/client_id
> client157729
>
> Client client157729 is alxc13, based on correlation by the ip address
> shown by the rados -p
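
The check above reads /sys/bus/rbd/devices/<id>/client_id for each mapped device; two devices reporting the same client_id are held by the same kernel client. A minimal sketch of that loop (the helper name is mine, the sysfs paths are the ones used above):

```shell
#!/bin/sh
# Print "<device id> -> <client_id>" for every rbd device under the
# given sysfs directory (normally /sys/bus/rbd/devices). Devices that
# share a client_id are mapped by the same kernel client.
list_rbd_clients() {
    for dev in "$1"/*; do
        [ -e "$dev/client_id" ] || continue
        printf '%s -> %s\n' "${dev##*/}" "$(cat "$dev/client_id")"
    done
}

list_rbd_clients /sys/bus/rbd/devices
```

Running this on the host in question would show, as above, devices 1 and 47 both mapped by client157729.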
On 09/15/2016 03:15 PM, Ilya Dryomov wrote:
> [snipped]

On Thu, Sep 15, 2016 at 12:54 PM, Nikolay Borisov wrote:
> [snipped]

On 09/15/2016 01:24 PM, Ilya Dryomov wrote:
> [snipped]

On Thu, Sep 15, 2016 at 10:22 AM, Nikolay Borisov wrote:
> [snipped]

On 09/14/2016 05:53 PM, Ilya Dryomov wrote:
> [snipped]

On Wed, Sep 14, 2016 at 3:30 PM, Nikolay Borisov wrote:
> [snipped]
On 09/14/2016 02:55 PM, Ilya Dryomov wrote:
> On Wed, Sep 14, 2016 at 9:01 AM, Nikolay Borisov wrote:
>>
>>
>> On 09/14/2016 09:55 AM, Adrian Saul wrote:
>>>
>>> I found I could ignore the XFS issues and just mount it with the
>>> appropriate options (below from my backup
On Wed, Sep 14, 2016 at 9:01 AM, Nikolay Borisov wrote:
>
>
> On 09/14/2016 09:55 AM, Adrian Saul wrote:
>>
>> I found I could ignore the XFS issues and just mount it with the appropriate
>> options (below from my backup scripts):
>>
>> #
>> # Mount with nouuid
> But shouldn't freezing the fs and doing a snapshot constitute a "clean
> unmount" hence no need to recover on the next mount (of the snapshot) -
> Ilya?
It's what I thought as well, but XFS seems to want to attempt to replay the log
regardless on mount and write to the device to do so. This
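
The sequence under discussion is freeze, snapshot, thaw. A minimal sketch of that sequence, assuming fsfreeze(8) and the rbd CLI (mount point and image names are placeholders, not from the thread; RUN=echo gives a dry run):

```shell
#!/bin/sh
# Quiesce the filesystem, take the RBD snapshot, then thaw.
# Note the point made above: even after fsfreeze, XFS may still
# attempt log replay (and hence a write) when the snapshot is
# mounted later.
RUN="${RUN:-}"   # set RUN=echo to dry-run the commands

snapshot_frozen() {
    mnt="$1"; image="$2"; snap="$3"
    $RUN fsfreeze --freeze "$mnt" || return 1
    $RUN rbd snap create "$image@$snap"
    rc=$?
    $RUN fsfreeze --unfreeze "$mnt"   # always thaw, even on failure
    return $rc
}
```

For example, `snapshot_frozen /mnt/myfs rbd/myimage backup1` would freeze /mnt/myfs, create rbd/myimage@backup1, and thaw.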
On 09/14/2016 09:55 AM, Adrian Saul wrote:
>
> I found I could ignore the XFS issues and just mount it with the appropriate
> options (below from my backup scripts):
>
> #
> # Mount with nouuid (conflicting XFS) and norecovery (ro snapshot)
> #
> if ! mount -o
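
The quoted backup script is truncated at the mount line; one plausible shape of such a mount guard, using the two options named in its comments (device and mount point are placeholders, not Adrian's actual values; RUN=echo gives a dry run):

```shell
#!/bin/sh
RUN="${RUN:-}"   # set RUN=echo to dry-run the command

mount_snapshot_ro() {
    dev="$1"; mnt="$2"
    # nouuid:     the snapshot carries the same XFS UUID as the live fs
    # norecovery: skip log replay, which would require writing to the
    #             read-only snapshot device
    if ! $RUN mount -o ro,nouuid,norecovery "$dev" "$mnt"; then
        echo "mount of $dev on $mnt failed" >&2
        return 1
    fi
}
```

Something like `mount_snapshot_ro /dev/rbd47 /mnt/snap` would then mount the mapped snapshot without touching its UUID or log.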
> Subject: Re: [ceph-users] Consistency problems when taking RBD snapshot
>
> On Tue, Sep 13, 2016 at 4:11 PM, Nikolay Borisov <ker...@kyup.com> wrote:
> >
> >
> > On 09/13/2016 04:30 PM, Ilya Dryomov wrote:
> > [SNIP]
> >>
> >> Hmm, it could be a
On Tue, Sep 13, 2016 at 4:11 PM, Nikolay Borisov wrote:
> [snipped]
On 09/13/2016 04:30 PM, Ilya Dryomov wrote:
[SNIP]
>
> Hmm, it could be about whether it is able to do journal replay on
> mount. When you mount a snapshot, you get a read-only block device;
> when you mount a clone image, you get a read-write block device.
>
> Let's try this again, suppose
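
The distinction drawn above is that mapping a snapshot gives a read-only device, while mapping a clone gives a read-write one on which XFS can replay its log. A minimal sketch of the clone path, assuming the standard rbd snap protect / clone / map subcommands (all names are placeholders; RUN=echo gives a dry run):

```shell
#!/bin/sh
RUN="${RUN:-}"   # set RUN=echo to dry-run the commands

clone_and_map() {
    src="$1"      # e.g. rbd/myimage@backup1 (the snapshot)
    dst="$2"      # e.g. rbd/myimage-restore (the writable clone)
    $RUN rbd snap protect "$src" &&   # cloning requires a protected snap
    $RUN rbd clone "$src" "$dst" &&
    $RUN rbd map "$dst"               # yields a read-write /dev/rbdN
}
```

Mounting the mapped clone (rather than the snapshot itself) then lets the journal replay proceed.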
On Tue, Sep 13, 2016 at 1:59 PM, Nikolay Borisov wrote:
> [snipped]

On 09/13/2016 01:33 PM, Ilya Dryomov wrote:
> [snipped]
On Tue, Sep 13, 2016 at 12:08 PM, Nikolay Borisov wrote:
> Hello list,
>
>
> I have the following cluster:
>
> ceph status
> cluster a2fba9c1-4ca2-46d8-8717-a8e42db14bb0
> health HEALTH_OK
> monmap e2: 5 mons at
>