Okay, just to leave a reference in the mailing list I'm replying w/
extra details found by quickly digging around the net. Hope it will be
useful to someone.

>> Use btrfs instead of LVM. That way you can do subvolumes.
>> Is btrfs stable enough?
> I've reviewed some user notes on btrfs and found it has some disadvantages:
>
> 1. fsck is not aware of btrfs, and an occasional manual run of fsck on
> btrfs may lead to partition corruption.
> 2. It stores some extra info for each file, so volumes w/ a lot of
> small files appear to use more space than on ext3/ext4
This doesn't seem very relevant - VM images are usually fat enough. :)
> 3. tools for btrfs are still in active development, so I should be
> careful to never end up using alpha- or beta-grade versions if I
> want to be sure everything is safe enough. I even consider ext4 not
> stable enough compared to ext3, and btrfs is one year younger than
> ext4
Also I wonder whether this is still relevant in some way after 6 years of
development: https://lkml.org/lkml/2010/6/18/144 ? ;-)

> BTW.
> If I enable Qubes development repositories - how can I ensure that
> btrfs tools never get newer than I want?
> BTW - at the moment I don't understand - is it possible to tune btrfs
> in such a way that the most often written metadata stuff goes
> via hdd only?
> I.e. /dev/sda is hdd, sdb is ssd - is it possible to create a btrfs
> partition that will put its metadata on ssd?
Found an answer to this:

mkfs.btrfs -m single /dev/sdc -d raid0 /dev/sdb /dev/sdd

Metadata will be created on sdc only, data in raid0 on sdb and sdd.

Though w/ that, to avoid running 'btrfs device scan' on each start we
have to specify the device list in fstab like this:

/dev/sdb  /mnt  btrfs  device=/dev/sdb,device=/dev/sdc,device=/dev/sdd,device=/dev/sde  0 0

That doesn't seem to be a problem.
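
To double check how the chunks actually got allocated, something like
this should work once the volume is mounted (I'm assuming /mnt as in the
fstab line above):

# list the member devices and how much space is used on each
btrfs filesystem show /mnt
# show the data/metadata allocation profiles (single, raid0, ...)
btrfs filesystem df /mnt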

I guess it's okay to use partitions, not entire disk drives, and make a
combination of partitions w/ different reliability levels, i.e. use raid1
for both data and metadata on the partitions used to store gpg appVMs,
and a less safe configuration for other app/template VMs :)
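
For the 'safer' volume that would be something like the following (the
partition names are made up, the point is just one partition per
physical disk):

# mirror both data and metadata across partitions on different disks
mkfs.btrfs -m raid1 -d raid1 /dev/sda3 /dev/sdb3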

Also it looks like a raid0 configuration is sometimes dangerous:
"It is not possible to use a volume in degraded mode if raid0 has been
used for data/metadata and the device had not been properly removed
with btrfs device delete"
"The situation is even worse if RAID0 is used for the metadata:
trying to mount a BTRFS volume in read/write mode while not all the
devices are accessible will simply kill the remaining metadata, hence
making the BTRFS volume totally unusable."
"The situation is no better if you have used RAID1 for the metadata
and RAID0 for the data: you can mount the drive in degraded mode but
you will encounter problems while accessing your files"
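
For reference, the degraded mode mentioned above is just a mount option
(the device name here is a placeholder):

# try to mount a multi-device btrfs volume with a member missing
mount -o degraded /dev/sdb /mnt
# or read-only, which is safer if you only want to rescue data
mount -o degraded,ro /dev/sdb /mnt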

The most detailed practical howto I've found is here:
http://www.funtoo.org/BTRFS_Fun

So it looks like a good choice. Thanks for the advice. :)


>> Separation below is due to faster reads from ssd. Also ssd drives degrade
>> in terms of stability if writes are made too often. Thus I want separate
>> app VM / template VM storage - template VMs are changed rarely. BTW - I'm not
>> sure where temporary images are stored when Qubes starts an App VM. The idea
>> is to get most reads from ssd and most writes to hdd.
> This is noted here:
> https://groups.google.com/forum/#!topic/qubes-devel/hG93VcwWtRY and
> (today/yesterday) in reply to a similar question around the subject in
> qubes-developers:
> https://groups.google.com/forum/#!topic/qubes-devel/wfqKiOYgV8Y
Also there's a howto on adding an ssd cache to Qubes here:
https://groups.google.com/forum/#!msg/qubes-users/ArHTEeQAH8A/r9zzY0DLBQAJ
(breaks Anti Evil Maid) - this seems to utilize
lvm caching abilities.
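
From what I understand that boils down to lvmcache; a rough sketch (the
VG/LV names are made up, and it assumes the ssd partition has already
been added to the same volume group):

# carve a cache pool out of the ssd and attach it to an existing LV on the hdd
lvcreate --type cache-pool -L 20G -n cachepool vg0 /dev/sdb1
lvconvert --type cache --cachepool vg0/cachepool vg0/qubes_dom0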

I'm not sure though what is better, but from my point of view there are
5 options:

1) classic LVM:
1.1) use a classic lvm setup w/o ssd cache, with encryption done on the
physical volume level - shouldn't have interference w/ anti-evil-maid
1.2) same as 1.1, but with encryption on the logical volume level
(which of the two is better?)

2) use lvm w/ an ssd cache - breaks the current implementation of anti
evil maid, faster reads, but keeping the ssd in a healthy state by
avoiding writes to it as much as possible seems questionable to me.
3) use lvm physical volumes as the encryption provider and btrfs for
everything above the physical volume level, and tune it to write
metadata to hdd only.
4) avoid lvm, use luks on physical partitions like /dev/sda5 /dev/sdb4
and use them as an encryption provider for btrfs as in 3 (see the
sketch after this list).
5) avoid luks, use older methods to provide physical device encryption
for btrfs (cryptoloop/loop-AES/...).
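
Just to make option 4 concrete, a minimal sketch of what I mean (the
partition and mapper names are placeholders, and I'm not sure how well
this plays with the Qubes installer/boot scripts):

# luks directly on the raw partitions, no lvm in between
cryptsetup luksFormat /dev/sda5
cryptsetup luksFormat /dev/sdb4
cryptsetup open /dev/sda5 crypt_ssd
cryptsetup open /dev/sdb4 crypt_hdd
# btrfs on top of the opened mapper devices
mkfs.btrfs /dev/mapper/crypt_ssd /dev/mapper/crypt_hdd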

As noted in https://help.ubuntu.com/community/EncryptedFilesystemHowto7:
The old cryptoloop / bare dm_crypt scheme provides exactly the same
encryption algorithms and security level. The only difference is that
LUKS is easier to manage, and allows multiple access keys per
partition.

With the old format, no configuration header is embedded in an
encrypted partition. It does not even store a trace of its single key
on the disk ... Unless you're turned off by the
single-unchangeable-key restriction, you should use the old format.
The above is somewhat Ubuntu specific, but the page has enough for a
better understanding.
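
For completeness, the "old format" (plain dm-crypt, no on-disk header)
looks roughly like this with a current cryptsetup; the cipher, key size
and device name here are just an example, not a recommendation:

# plain mode: nothing is stored on disk, the passphrase + options fully define the mapping
cryptsetup open --type plain --cipher aes-xts-plain64 --key-size 512 /dev/sdb4 plain_hdd
mkfs.btrfs /dev/mapper/plain_hdd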

And if I understand things right, the most tunable alternative is 3 or
4, as 1 requires some patches to be applied (to have a fully read-only
dom0, see https://groups.google.com/forum/#!topic/qubes-devel/wfqKiOYgV8Y).
It looks like 4) should be faster than 3), and since btrfs has its own
implementation of most (or everything?) lvm has, this should be
preferable. And 5 is probably not supported by Qubes boot scripts (I'm
not sure about this), but should be faster, as luks adds a little
overhead (I've never compared them myself, but a friend of mine told me
he gets better performance when encrypting w/o luks). Also, when you
don't use luks it seems to fit plausible deniability better - no trace
of encryption is left on the disk, no standard luks container is visible
on a raw disk read. More reading here:
https://help.ubuntu.com/community/EncryptedFilesystemHowto
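
A quick way to see that difference on a device (again, /dev/sdb4 is just
a placeholder):

# exit code tells whether a LUKS header is present on the partition
cryptsetup isLuks /dev/sdb4 && echo "LUKS header present" || echo "no LUKS header"
# with LUKS the header and keyslot info are readable right from the raw device
cryptsetup luksDump /dev/sdb4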

Russian-speaking users could look at
http://www.bog.pp.ru/work/LUKS.html for a review of luks (+ there are
links to English papers from there).


>> On February 18, 2017 8:21:10 AM PST, Oleg Artemiev <grey.o...@gmail.com>
>> wrote:
>>>
>>> Hello,
>>>
>>> I'm about to upgrade from Qubes 3.0 to Qubes 3.2 now.
>>>
>>> I've two terabytes (1 ssd and 1 hdd) in my laptop and 16Gigs of
>>> memory. Is separation into different mount points as proposed below
>>> a good idea? Please note if you think that something could also be
>>> moved to ssd. My criterion for ssd stuff is "often read, very rarely
>>> written".
>>> As everything is encrypted, there's no need for gpt - a dos partition table is fine.
>>>
>>> ssd:
>>> /   - 400Mb
>>> /usr - 5Gb
>>> /boot - 300Mb
>>> /var/lib/qubes/vm-templates - 350Gb
>>> /var/lib/qubes/vm-kernels   - 3.5Gb
>>> /var/lib/rpm                - 100Mb
>>> /var/lib/yum                - 50Mb
>>>
>>> individual catalogues under /home/<myuser>/ - up to 100 mount points ,
>>> unsure which ones are rewritten rarely and thus worth moving to ssd,
>>> thus will move after upgrade.
>>>
>>> hdd:
>>> /a_copy_of_/boot         - 300Mb
>>> /tmp                     - 32Gb - looks like it has to be not less
>>> than the biggest VM size
>>> swap                     - 32Gb
>>> /home - 100Mb
>>> /var/log - 300Mb
>>>
>>> BTW: Looks like LVM thin provisioning gives at least two times slower
>>> writes, so I'm about to use usual LVM.

-- 
Bye.Olli.
gpg --search-keys grey_olli , use key w/ fingerprint below:
Key fingerprint = 9901 6808 768C 8B89 544C  9BE0 49F9 5A46 2B98 147E
Blog keys (the blog is mostly in Russian): http://grey-olli.livejournal.com/tag/
