Re: [qubes-users] Q4.0 - LVM Thin Pool volumes - lsblk returns very large (256kb) MIN-IO and DISC-GRAN values

2019-05-25 Thread Chris Laprise

On 5/24/19 10:00 PM, brendan.h...@gmail.com wrote:

Hi folks,

Summary/Questions:

1. Is the extremely large minimum-IO value of 256KB for the dom0 block devices 
representing Q4 VM volumes in the thin pool ... intentional?
2. And if so, for what purpose (e.g. performance)?
3. And if so, has the impact of this value on relying on discards to return 
unused disk space to the pool been factored in?

---discussion and supporting cmd output follows---

As you can see below, the MIN-IO (minimum I/O size for reads/writes/discards) 
and DISC-GRAN (minimum granularity allowed for discard/trim commands) values on 
most of the thin pool volumes are both set to 256KB. This is the case for the 
debian-9 VM and the dom0 root volume, and the same holds for all the VM volumes 
that I cut out of the output below for brevity/privacy.
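
For anyone who wants to reproduce the reading, something along these lines in 
dom0 should show the relevant columns (the volume path is only an example and 
will differ per VM):

lsblk -o NAME,TYPE,MIN-IO,DISC-GRAN,DISC-MAX /dev/qubes_dom0/vm-debian-9-root
lsblk -D    # discard-related columns for the whole device stack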

Everything else in the stack (the drives, partitions, the LUKS/crypt container, 
and even some of the non-VM filesystem pool volumes and/or metadata) has much 
more reasonable MIN-IO and DISC-GRAN values of 512 bytes or 4K... including 
dom0 swap!

The result is that turning on automatic trimming of the filesystems within VMs 
requires large holes to be created on the virtual disk before triggering 
discards that can be transmitted down the stack during deletions. To rephrase: 
in the default configuration, for data to be recovered from VM volumes back 
into the pool after deletions, the deletions must include files with large 
contiguous sections. Also, this negatively impacts physical disk trimming, if 
the user has configured it.

The 256K value may explain why folks have found that manually invoking 
'sudo fstrim -av' is the only guaranteed way to trigger a full release of 
storage back into the pool from VMs, leaving users who do not regularly trim 
inside their VMs at risk of the pool running out of room.
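
To be explicit about the workaround (run inside the VM; -a trims every mounted 
filesystem that supports discards, -v reports how much was released; /rw is the 
private volume in an AppVM):

sudo fstrim -av
sudo fstrim -v /rw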


Hi Brendan,

It would be interesting if thin-lvm min transfer were the reason for 
this difference in behavior between fstrim and the filesystem.


However, I think you're wrong to assume that any free block at any scale 
should be discarded at the lvm level. This behavior is probably a 
feature designed to prevent pool metadata use from exploding to the 
point where the volume becomes slow or unmanageable. Controlling 
metadata size is a serious issue with COW storage systems and at some 
point compromises must be made between data efficiency and metadata 
efficiency.


On thin-lvm volumes, maxing-out the allocated metadata space can have 
serious consequences including loss of the entire pool. I experienced 
this myself several weeks ago and I was just barely able to manage 
recovery without reinstalling the whole system – it involved deleting 
and re-creating the thin-pool, then restoring all the volumes from backup.


Run the 'lvs' command and look at the Meta% column for pool00. If it's 
much more than 50%, there is reason for concern, because if you put the 
system through a flurry of activity (including cloning/snapshotting 
and/or modifying many small files) then that figure could balloon close 
to 100% in a very short period.
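
Something like this in dom0 shows both figures at once (the qubes_dom0/pool00 
names assume a default Qubes 4.0 install):

sudo lvs -o lv_name,lv_size,data_percent,metadata_percent qubes_dom0/pool00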


--

Chris Laprise, tas...@posteo.net
https://github.com/tasket
https://twitter.com/ttaskett
PGP: BEE2 20C5 356E 764A 73EB  4AB3 1DC4 D106 F07F 1886



[qubes-users] Re: qubes-template-debian-10

2019-05-25 Thread faggot shitty
On Thursday, May 23, 2019 at 20:52:08 UTC+3, Dominique St-Pierre Boucher wrote:
> Good day qubes-users,
> 
> Do you know when a debian 10 template will be available?
> 
> Tried to do the manual upgrade and keep running into issues.
> 
> Thanks
> 
> Dominique

you can use debian 11 already.
rename repo && update && profit



Re: [qubes-users] AEM pcr sanity check failed

2019-05-25 Thread Patrik Hagara
On 5/26/19 2:32 AM, Patrik Hagara wrote:
> as the one in Fedora repos is quite
> ancient with multiple - even security - bugs

Actually, scratch that. You can get an up-to-date Fedora tboot package
(newer than Ubuntu!) from a more recent release [1], manually verify the
signatures [2] and install that.


[1] https://rpmfind.net/linux/rpm2html/search.php?query=tboot
[2] https://getfedora.org/security/
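
Roughly, in a disposable VM (the package path/version below is a placeholder; 
check the mirror for the current one):

curl -LO https://dl.fedoraproject.org/pub/fedora/linux/releases/30/Everything/x86_64/os/Packages/t/tboot-VERSION.fc30.x86_64.rpm
rpm --import fedora.gpg      # Fedora keys obtained via [2]
rpm -K tboot-VERSION.fc30.x86_64.rpm      # should report a good GPG signature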



Re: [qubes-users] Q4.0 - LVM Thin Pool volumes - lsblk returns very large (256kb) MIN-IO and DISC-GRAN values

2019-05-25 Thread unman
On Fri, May 24, 2019 at 08:03:22PM -0700, brendan.h...@gmail.com wrote:
> Looks like the chunksize of the pool is the controlling factor (256kb) here.
> 
> % lvs -o name,chunksize|grep pool
> 
> Docs say the default value is 64kb (that’s also the minimum for a thin pool). 
> Not sure why qubesos value is higher.
> 

Docs also say that where a thin pool is used primarily for thin
provisioning, a larger value is optimal.

This isn't a Qubes choice - it's Fedora's (and, I think, dependent on the
size of the pool).



Re: [qubes-users] WARNING: don't update qubes, will break your install

2019-05-25 Thread drokmed
I did a full backup a week ago.  I think I'll do another fresh install, then 
restore from last week's backup.

I lost a week's worth of work.  That's what I get for slacking on backups.

I'll have to hold off on ANY updates until I hear something on this issue.



Re: [qubes-users] Q4.0 - LVM Thin Pool volumes - lsblk returns very large (256kb) MIN-IO and DISC-GRAN values

2019-05-25 Thread Chris Laprise

On 5/25/19 12:45 PM, Brendan Hoar wrote:


> On Sat, May 25, 2019 at 12:09 PM Chris Laprise wrote:
>
>> It would be interesting if thin-lvm min transfer were the reason for
>> this difference in behavior between fstrim and the filesystem.
>
> Indeed. Pretty sure that is the case for some workloads.

>> However, I think you're wrong to assume that any free block at any scale
>> should be discarded at the lvm level. This behavior is probably a
>> feature designed to prevent pool metadata use from exploding to the
>> point where the volume becomes slow or unmanageable. Controlling
>> metadata size is a serious issue with COW storage systems and at some
>> point compromises must be made between data efficiency and metadata
>> efficiency.
>
> Agreed. I started with that assumption but as I read through the docs I
> realized there was some performance-related balancing going on.


>> On thin-lvm volumes, maxing-out the allocated metadata space can have
>> serious consequences including loss of the entire pool. I experienced
>> this myself several weeks ago and I was just barely able to manage
>> recovery without reinstalling the whole system – it involved deleting
>> and re-creating the thin-pool, then restoring all the volumes from
>> backup.
>
> Ouch!
>
> I’m going to add an Issue/Feature request to add metadata store
> monitoring and alerts to the disk space widget. :)
>
> —-
>
> I will note that the docs indicate that lvcreate uses the pool
> allocation size divided by the chunk size times a multiplier to
> determine the default metadata store size (assuming you don’t override
> the final value). So if you specify the chunk size the “default”
> metadata store is *supposed* to scale...
>
> One can also specify a safer (larger) metadata store during lvcreate at
> the expense of file storage of course.


Based on my experience (two metadata meltdowns since moving to Qubes 4), 
I would open another issue to have Qubes double or triple the system's 
default metadata size after installation. Proportionally, the loss of 
data space is small and it's easy to implement using 'lvresize 
--poolmetadatasize'.
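
For example, something like this in dom0 (the +1G increment and the 
qubes_dom0/pool00 names are what I'd expect on a default install; adjust to 
taste):

sudo lvresize --poolmetadatasize +1G qubes_dom0/pool00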




> I ran across a discussion of chunk size guidance and one thing I’ll note
> is that for heavy COW workloads the recommendation was to keep the chunk
> size value at the low end but be sure to increase the metadata store
> size. I’ll see if I can find it in my browser history.
>
>> Run the 'lvs' command and look at the Meta% column for pool00. If its
>> much more than 50% there is reason for concern, because if you put the
>> system through a flurry of activity including cloning/snapshotting
>> and/or modifying many small files then that figure could balloon close
>> to 100% in a very short period.
>
> Will do!
>
> In the end I am just puzzled why the default chunk is 256k and not 64k,
> though. I haven’t found a place in the qubes installer iso source where
> the size is overriden.


64k is the minimum but this increases when the pool size reaches certain 
thresholds. On my system, it's 128k. As for Redhat switching to such a 
large (2MB) minimum size, I think it should be regarded as throwing up 
one's hands and giving up on the subject. IMO, it's too large and 
shouldn't be used.


FWIW, Redhat's new COW storage system is a frankenstein patchwork using 
xfs volumes like some kind of block layer. It looks about as elegant and 
comprehensible as their other gift to the world, systemd. They need to 
hire better engineers.


I think the only _good_ way to deal with COW metadata expansion, since 
it's always related to data fragmentation, is to keep expanding it and 
let system performance degrade accordingly. This simply makes 
de-fragmentation a maintenance issue (defrag to shrink metadata and get 
performance back). This is what Microsoft did with NTFS and it was the 
right choice; clinging to fixed metadata sizes is merely a state of 
denial that leads to people's disks suddenly becoming unusable.




> I also ran across docs from red hat saying the the 7.4 to 7.5 rhel
> transition moved from a default of 64KB to 2MB (possibly due to
> upstream?)...so discard on delete’s usefulness inside VMs may be even
> more constrained in the future if I read that right.


It's a good bet that "upstream" in this case is Redhat.



> I’ll probably open a feature ticket asking for auto fstrim of the
> mounted rw filesystems on templates/templated VM shutdowns. As it is, I
> already do this manually on templates after every update and from time
> to time in VMs that see a lot of file churn.


--

Chris Laprise, tas...@posteo.net
https://github.com/tasket
https://twitter.com/ttaskett
PGP: BEE2 20C5 356E 764A 73EB  4AB3 1DC4 D106 F07F 1886


Re: [qubes-users] AEM pcr sanity check failed

2019-05-25 Thread Patrik Hagara
On 5/22/19 1:56 PM, m...@peterihasz.com wrote:
> I renamed the sinit module to the right one. I think it is loaded but the 
> system continously reboots. 
> Here are the logs:
> https://photos.app.goo.gl/MtYNRYf8uduy1Ukn8

Try appending "min_ram=0x200" [1] to tboot params.
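
On the Fedora/Qubes tboot package I believe the parameter belongs on the tboot
command line in the GRUB entry; the file names below are my assumption of
where that package puts it, so treat this as a sketch:

sudo sed -i 's/^GRUB_CMDLINE_TBOOT="/&min_ram=0x200 /' /etc/default/grub-tboot
sudo grub2-mkconfig -o /boot/grub2/grub.cfg    # legacy-boot path; EFI differs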

If it's still stuck in a reboot loop, try a newer tboot version. Either
use the binary from Ubuntu package (as the one in Fedora repos is quite
ancient with multiple - even security - bugs) or compile [2] it yourself.

Sadly, tboot does not provide signed release tarballs. :(

[1] https://github.com/QubesOS/qubes-issues/issues/2155
[2] https://sourceforge.net/projects/tboot/


Cheers,
Patrik



Re: [qubes-users] WARNING: don't update qubes, will break your install

2019-05-25 Thread drokmed
I just noticed on the Fedora forums they released Fedora 30 today.

Qubes runs Fedora 29, so I don't know if today's updates had anything to do 
with F30, but it might be related.  Checking to see if anyone else on the 
Fedora forums has a similar issue to mine.



Re: [qubes-users] Q4.0 - LVM Thin Pool volumes - lsblk returns very large (256kb) MIN-IO and DISC-GRAN values

2019-05-25 Thread Brendan Hoar
On Sat, May 25, 2019 at 12:09 PM Chris Laprise  wrote:

>
> It would be interesting if thin-lvm min transfer were the reason for
> this difference in behavior between fstrim and the filesystem.


Indeed. Pretty sure that is the case for some workloads.

> However, I think you're wrong to assume that any free block at any scale
> should be discarded at the lvm level. This behavior is probably a
> feature designed to prevent pool metadata use from exploding to the
> point where the volume becomes slow or unmanageable. Controlling
> metadata size is a serious issue with COW storage systems and at some
> point compromises must be made between data efficiency and metadata
> efficiency.


Agreed. I started with that assumption but as I read through the docs I
realized there was some performance-related balancing going on.

> On thin-lvm volumes, maxing-out the allocated metadata space can have
> serious consequences including loss of the entire pool. I experienced
> this myself several weeks ago and I was just barely able to manage
> recovery without reinstalling the whole system – it involved deleting
> and re-creating the thin-pool, then restoring all the volumes from backup.


Ouch!

I’m going to add an Issue/Feature request to add metadata store monitoring
and alerts to the disk space widget. :)

—-

I will note that the docs indicate that lvcreate uses the pool allocation
size divided by the chunk size times a multiplier to determine the default
metadata store size (assuming you don’t override the final value). So if
you specify the chunk size the “default” metadata store is *supposed* to
scale...

One can also specify a safer (larger) metadata store during lvcreate at the
expense of file storage of course.
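
For illustration, a hand-built pool created with explicit values might look 
like this (sizes and names purely illustrative, not what the installer does):

sudo lvcreate --type thin-pool -L 400G --chunksize 256k --poolmetadatasize 2G -n pool00 qubes_dom0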

I ran across a discussion of chunk size guidance and one thing I’ll note is
that for heavy COW workloads the recommendation was to keep the chunk size
value at the low end but be sure to increase the metadata store size. I’ll
see if I can find it in my browser history.

> Run the 'lvs' command and look at the Meta% column for pool00. If its
> much more than 50% there is reason for concern, because if you put the
> system through a flurry of activity including cloning/snapshotting
> and/or modifying many small files then that figure could balloon close
> to 100% in a very short period.


Will do!

In the end I am just puzzled why the default chunk is 256k and not 64k,
though. I haven’t found a place in the Qubes installer ISO source where the
size is overridden.

I also ran across docs from Red Hat saying the 7.4 to 7.5 RHEL transition
moved from a default of 64KB to 2MB (possibly due to upstream?)... so
discard-on-delete’s usefulness inside VMs may be even more constrained in
the future, if I read that right.

I’ll probably open a feature ticket asking for auto fstrim of the mounted
rw filesystems on templates/templated VM shutdowns. As it is, I already do
this manually on templates after every update and from time to time in VMs
that see a lot of file churn.

Brendan



Re: [qubes-users] WARNING: don't update qubes, will break your install

2019-05-25 Thread d
On Saturday, May 25, 2019 at 6:05:29 PM UTC-7, unman wrote:
> On Sat, May 25, 2019 at 01:55:43PM -0700, drok...@gmail.com wrote:
> > I just finished a fresh install of Qubes, then updated dom0, fedora 
> > template, debian template, whonix gw and ws templates, as per instructions 
> > on
> > 
> > https://www.qubes-os.org/doc/installation-guide/
> > 
> > Shutdown, boot.
> > 
> > Qubes won't let me login.  Same problem as in my other post.
> > 
> 
> Nothing wrong with updates on my systems.
> What's your hardware?
> Did you reboot and log in successfully before updating the system?

Hi unman, thanks for responding.

Yes, I did a fresh install, went fine.  Rebooted fine.  Updated dom0 and all 
templates fine.  After restart, something is broken.  I guess my hardware 
doesn't like the latest update.

ASUS G73Jh 213 laptop
Q740@1.73GHz i7 quadcore
AMD Mobility Radeon HD 5870

It's an older laptop, bought it in 2010.  BIOS updated to final before they 
stopped supporting.  Too bad, it runs Qubes great.

It gets stuck in a loop right before the login screen, switching from text 
mode to GUI and back.  Is there a way to force console mode on Qubes?  That 
could be useful.

Otherwise, I can do a fresh install again, but it's not a good idea to run the 
shipping 4.0.1 without updating templates, especially the Debian one, which I 
do use.

I'm open to suggestions.  Thanks.



[qubes-users] Qubes Server Formulas

2019-05-25 Thread Frédéric Pierret
Dear Qubes community,

Just to inform you that I'm currently working on Qubes Server Formulas
(Salt) to provide examples of server configurations. The aim is to
provide sufficient built-in Qubes materials to bring Qubes to the edge
of server environments. You can track this work here:
https://github.com/QubesOS/qubes-issues/issues/5051. I have succeeded in
creating some working topologies, but there are still some adjustments
to make in some Qubes components.

If you have any questions about this work, please use the mailing list
instead of the issue itself.

Best regards,

Frédéric Pierret






[qubes-users] WARNING: don't update qubes, will break your install

2019-05-25 Thread drokmed
I just finished a fresh install of Qubes, then updated dom0, fedora template, 
debian template, whonix gw and ws templates, as per instructions on

https://www.qubes-os.org/doc/installation-guide/

Shutdown, boot.

Qubes won't let me login.  Same problem as in my other post.



Re: [qubes-users] WARNING: don't update qubes, will break your install

2019-05-25 Thread drokmed
Yes, that worked.  I'm back to a fresh install.

I didn't know we could do that.  Damn, I was a little hasty formatting my 
drive.  That's okay, I didn't lose much.

If I try upgrading again, it will break again.  Maybe whatever fedora broke 
will get fixed in the coming weeks.  I don't like the idea of running without 
updates.

I can restore backups to last week, then just wait and see if any fixes come 
out.



[qubes-users] Re: [qubes-devel] Qubes Server Formulas

2019-05-25 Thread Scarpafo Scarpafo
Yeah!

Let's go server!!

QubesOS rules the world!

On Sat, May 25, 2019 at 19:21, Frédéric Pierret wrote:

> Dear Qubes community,
>
> Just to inform you that I'm currently working on Qubes Server Formulas
> (Salt) for providing examples of server configurations. It intends to
> provide sufficient built-in Qubes materials for bringing Qubes to the
> edge of server environments. You can track this work here:
> https://github.com/QubesOS/qubes-issues/issues/5051. I succeeded to
> create some working topologies but there is still some adjustments to do
> in some Qubes components.
>
> If you have any question on this work, please use the mailing list
> instead of the issue itself.
>
> Best regards,
>
> Frédéric Pierret
>
>



Re: [qubes-users] WARNING: don't update qubes, will break your install

2019-05-25 Thread unman
On Sat, May 25, 2019 at 06:22:11PM -0700, d wrote:
> On Saturday, May 25, 2019 at 6:05:29 PM UTC-7, unman wrote:
> > On Sat, May 25, 2019 at 01:55:43PM -0700, drok...@gmail.com wrote:
> > > I just finished a fresh install of Qubes, then updated dom0, fedora 
> > > template, debian template, whonix gw and ws templates, as per 
> > > instructions on
> > > 
> > > https://www.qubes-os.org/doc/installation-guide/
> > > 
> > > Shutdown, boot.
> > > 
> > > Qubes won't let me login.  Same problem as in my other post.
> > > 
> > 
> > Nothing wrong with updates on my systems.
> > What's your hardware?
> > Did you reboot and log in successfully before updating the system?
> 
> Hi unman, thanks for responding.
> 
> Yes, I did a fresh install, went fine.  Rebooted fine.  Updated dom0 and all 
> templates fine.  After restart, something is broken.  I guess my hardware 
> doesn't like the latest update.
> 
> ASUS G73Jh 213 laptop
> Q740@1.73GHz i7 quadcore
> AMD Mobility Radeon HD 5870
> 
> It's an older laptop, bought it in 2010.  BIOS updated to final before they 
> stopped supporting.  Too bad, it runs Qubes great.
> 
> It gets stuck in a loop, right before login screen, keeps switching from text 
> mode to GUI and back.  Is there a way to force into console mode on qubes?  
> That could be useful.
> 
> Otherwise, I can do a fresh install again, but not a good idea to run the 
> shipping 4.01 without updating templates, especially the Debian, which I do 
> use.
> 
> I'm open to suggestions.  Thanks.
> 

Are you able to boot in to QUbes using an older Xen/kernel combination?
You should be able to select this at the boot menu.



Re: [qubes-users] Q4.0 - LVM Thin Pool volumes - lsblk returns very large (256kb) MIN-IO and DISC-GRAN values

2019-05-25 Thread 'awokd' via qubes-users

Brendan Hoar wrote on 5/25/19 4:45 PM:

> On Sat, May 25, 2019 at 12:09 PM Chris Laprise wrote:
>
> I’m going to add an Issue/Feature request to add metadata store monitoring
> and alerts to the disk space widget. :)


I had the same thought reading Chris' email.



Re: [qubes-users] X230 vs Purism - real world attack probability

2019-05-25 Thread taii...@gmx.com
On 05/21/2019 09:52 AM, scurge1tl wrote:
> I have a question related to the decision about what laptop is the
> better option for Qubes usage, from the security point of view, in the
> real world.
> 
> The question is related to the IME on Intel, PSP on AMD and other
> Hardware holes. I took these laptop examples to sample the differences
> somehow.
> 
> Does the absence of microcontroller updates, as in the case of an X230
> with the IME disabled and corebooted, which no longer gets these updates,
> pose a

What updates? Who told you that? What microcontrollers?

> higher risk than the only-partial disabling of the IME by Purism,
> which still gets the microcontroller updates? Or is it vice versa?
> 
> If I would like to have a strong security position, in case of the
> laptop Hardware with Qubes, and would decide in between the two, which
> variant will be more prone to the real world attacks? What attack
> vectors are available in both cases? For example, is one of the cases
> more resistant to the remote exploitation. Is one of the options
> forcing an attacker more to execute an attack with physical access
> than the other option?
> 

pur.company is junk; they are an incredibly dishonest company that sells
"coreboot open firmware librem" machines whose hw init process is entirely
performed via the Intel FSP binary blob.

The x230 is far more free than anything pur.company could sell you.
Freeing the Intel FSP won't happen, given how difficult it would be without
documentation and how long it would take, and it is both impossible and
illegal to free the Intel ME.

Illegal? Yes - ME/PSP is a DRM mechanism and bypassing them is illegal
in the USA, where they are based.

But since the x230 still has an ME (a bit more nerfed than the purijunk), you
should get a G505S, which has no ME/PSP and is the most free laptop option.

Pur.junk = me kernel+init code run (not disabled), HW init 100% blobbed
- performed via Intel FSP
X230 = me init code runs (not disabled), HW init is open source
G505S = No ME/PSP, CPU/RAM hw init is open source, graphics/power mgmt
requires blob but IOMMU prevents them from messing with stuff. - the
most free

pur.company lies by claiming their ME is "disabled" when the kernel and
init code still run.


I don't want to say their name, as they send someone out of the woodwork
to defend them and waste my time every time someone mentions them in a
negative light. They start claiming that they are "doing their best",
whereas various other, much newer companies are actually selling
owner-controlled, libre-firmware, trustworthy general computing hardware,
proving their claims of "doing our best" to be bullshit.

If you want more info see my other posts as I have made many of them re:
pur.company or laptop/desktop/workstation selections.





Re: [qubes-users] WARNING: don't update qubes, will break your install

2019-05-25 Thread unman
On Sat, May 25, 2019 at 01:55:43PM -0700, drok...@gmail.com wrote:
> I just finished a fresh install of Qubes, then updated dom0, fedora template, 
> debian template, whonix gw and ws templates, as per instructions on
> 
> https://www.qubes-os.org/doc/installation-guide/
> 
> Shutdown, boot.
> 
> Qubes won't let me login.  Same problem as in my other post.
> 

Nothing wrong with updates on my systems.
What's your hardware?
Did you reboot and log in successfully before updating the system?



[qubes-users] qubes updater broken? just had to re-install qubes

2019-05-25 Thread drokmed
Don't know if anyone else had this problem yet today.

Booted my computer this morning, Qubes 4 fully updated, been using for months, 
apply updates every day.

Today, I noticed lots of updates.  Dom0 had both kernel and Xen updates, so did 
all the other qubes.

I let the normal Qubes Updater do its thing, updating each qube.  It completed 
fine, or so I thought.  I noticed that several qubes didn't shut down quite 
right: clicking the Q button on the right, they were still there, but showed 
zero memory usage.  I tried shutting them down manually, but nothing happened.  
Qubes Manager reported they were shut down.

I decided to do a normal shutdown, which went fine.

When it booted back up, I entered the disk password and it seemed to boot 
normally, but as soon as it got to the user login screen, the display started 
flashing back and forth between text mode and GUI mode.  It kept cycling, and 
the hard disk light blinked each time.  I let it run for a while, hoping it 
would change, but no luck.  Did a ctrl-alt-del and it does a proper 
shutdown/reboot, but comes back up with the same problem.

Reformatting it now.  Good thing I did recent backup to USB stick.

Maybe it was just my bad luck, let's hope so.  Today, I re-learned the value of 
backups.

If you are reading this... backup recently?



Re: [qubes-users] Re: qubes-template-debian-10

2019-05-25 Thread unman
On Sat, May 25, 2019 at 12:02:18AM -0700, faggot shitty wrote:
> On Thursday, May 23, 2019 at 20:52:08 UTC+3, Dominique St-Pierre Boucher wrote:
> > Good day qubes-users,
> > 
> > Do you know when a debian 10 template will be available?
> > 
> > Tried to do the manual upgrade and keep running into issues.
> > 
> > Thanks
> > 
> > Dominique
> 
> you can use debian 11 already.
> rename repo && update && profit
> 

debian-11? You're mistaken there.



[qubes-users] qubes backup and restore question

2019-05-25 Thread drokmed
What is the point of backing up a qube, if you can't restore to it?

I'm restoring a full backup, and instead of restoring to qubes, it's creating 
new ones with different names.  WTF!

What the hell am I supposed to do now?  Delete the original qubes, and rename 
the restores?  I'm guessing that will break a bunch of links, which I'll have 
to manually fix.  Am I the only person that thinks this is totally stupid?

And it says the restored dom0 files are located in some other directory.  What 
the hell am I supposed to do with that?  Dom0 can't be restored?  Are you 
kidding me?

It's a good thing I gave up drinking when my brother died.  I definitely need a 
drink right about now.  This is maddening.
