Dear All,
Following a physical disk failure of a RAID1 array, I tried to mount
the remaining volume of a root partition with "-o degraded". For some
reason it ended up as read-only as described here:
https://btrfs.wiki.kernel.org/index.php/Gotchas#raid1_volumes_only_mountable_once_RW_if_degraded
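For reference, the degraded-mount sequence being described might look like the sketch below. The device path /dev/sdb2 and the replacement device /dev/sdc2 are hypothetical placeholders, and the commands need root on an actual btrfs filesystem:

```shell
# Mount the surviving half of the RAID1 array read-write while degraded.
# Per the gotcha linked above, on affected kernels this may succeed rw
# only once: single (non-raid1) chunks created during the degraded rw
# mount cause later degraded mounts to fall back to read-only.
mount -o degraded /dev/sdb2 /mnt

# Check whether the mount actually came up read-write:
findmnt -no OPTIONS /mnt

# If it is rw, replace the failed device before unmounting, e.g.:
#   btrfs replace start <failed-devid> /dev/sdc2 /mnt
```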
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Dear All,
The file system on a RAID1 Debian server seems corrupted in a major
way, with 99% of the files not found. This was the result of an
unclean shutdown after a crash, which was preceded by an accidental
misconfiguration in /etc/fstab: it pointed "/" and "/tmp" to one and
the same UUID.
On 7 February 2016 at 20:27, Lionel Bouton wrote:
> Hi,
>
> On 07/02/2016 14:15, Andreas Hild wrote:
>> Dear All,
>>
>> The file system on a RAID1 Debian server seems corrupted in a major
>> way, with 99% of the files not found. This was the result of a
>
On 7 February 2016 at 20:56, Qu Wenruo wrote:
>
> You are wondering why data is still 168G, but that's the allocated data
> chunk size.
>
> It means 168G of space is allocated to store data, but only 42M of it
> is actually used. This matches your vanilla df output.
>
> So it doesn't mean your data is still there.
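The allocated-versus-used distinction above can be illustrated with a small sketch. This is not btrfs code; it just parses a hypothetical `btrfs filesystem df` output line matching the figures in this thread (168G of allocated data chunks, 42M used):

```python
# Illustration of btrfs space accounting: "total" is the space allocated
# to data chunks, "used" is the live data actually stored inside them.
import re

def parse_btrfs_df_line(line):
    """Extract allocated ('total') and used sizes, in bytes, from one
    `btrfs filesystem df`-style output line."""
    units = {"KiB": 1024, "MiB": 1024**2, "GiB": 1024**3, "TiB": 1024**4}
    m = re.search(r"total=([\d.]+)(\w+), used=([\d.]+)(\w+)", line)
    total = float(m.group(1)) * units[m.group(2)]
    used = float(m.group(3)) * units[m.group(4)]
    return total, used

# Hypothetical sample line reflecting the numbers discussed above:
sample = "Data, RAID1: total=168.00GiB, used=42.00MiB"
total, used = parse_btrfs_df_line(sample)
print(f"allocated={total/1024**3:.0f}GiB used={used/1024**2:.0f}MiB "
      f"({used/total:.6%} of allocated chunks in use)")
```

So a large "total" (here 168G) only says how much chunk space has been allocated; nearly all of it can be empty, which is why it does not imply the data survived.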
On 7 February 2016 at 09:27, Qu Wenruo wrote:
>
>
> On 02/07/2016 10:23 PM, Andreas Hild wrote:
>>
>> On 7 February 2016 at 20:56, Qu Wenruo wrote:
>>>
>>>
>>> You are wondering why data is still 168G, but that's the allocated data
>>> chunk size.