Re: Antw: [EXT] [qubes-users] probable lvm thin_pool exhaustion

2020-03-10 Thread maiski



Quoting brendan.h...@gmail.com:


On Wednesday, March 11, 2020 at 1:34:17 AM UTC, maiski wrote:



Quoting brend...@gmail.com :
>
> Qubes 4.1 (in development) has added a warning (in addition to the
> current lvm space usage warning) for lvm metadata usage above a
> threshold. 4.0 doesn't have the metadata-nearing-full warning, and
> that's what tends to cause these types of thinpool issues.
>
> In addition to the warning, Qubes 4.1 is also doubling (vs. the lvm
> default value) the amount of space set aside for lvm thinpool metadata,
> which will substantially reduce the chances of ever hitting this issue
> under 4.1.
>
> Brendan
>
> PS - the above is not helpful for recovering this machine, of course.
> However, recovery from this can be very difficult, and even after
> recovery you are not guaranteed to recover all the data. The Qubes devs
> are aware of this and very much want to avoid these issues in the next
> release.

Hm, yes, this does not help :/
What about running fstrim on the SSD and trying to boot again?
@brendan: I've seen that you had some thoughts about lvm in some postings,
so would you care to elaborate/brainstorm on the situation I described?
You know, every input is valuable right now :)




 TBH, I wouldn't know what to do. Ran into a similar problem with 4.0 a
long while back and just reinstalled because it seemed insurmountable at
the time.

I've been reducing my main pool usage and manually monitoring the metadata
to avoid the situation with my current install, waiting for 4.1 to become
stable before moving to it.
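A check along these lines in dom0 shows both data and metadata usage for
the pool (qubes_dom0/pool00 are the default 4.0 names; adjust if yours
differ):

  # report data% and metadata% for the thin pool, plus the hidden *_tmeta LV
  sudo lvs -a -o lv_name,lv_size,data_percent,metadata_percent qubes_dom0

Once metadata_percent gets anywhere near full, it is time to free space or
grow pool00_tmeta before the pool ends up in the state described in this
thread.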

Chris Laprise (tasket) would be a better resource, if he's willing to jump
in.

Brendan



I remember also running into a similar issue wy back, I adjusted a  
param in grub/xen.cfg, i do not remember, and told lvm to surpass its  
set threshold for maximum filled pool size so it can boot, but yeah,  
this is not the issue here... Nonetheless thank you for the quick  
answer!





Re: Antw: [EXT] [qubes-users] probable lvm thin_pool exhaustion

2020-03-10 Thread maiski



Quoting brendan.h...@gmail.com:


Qubes 4.1 (in development) has added a warning (in addition to the current
lvm space usage warning) for lvm metadata usage above a threshold. 4.0
doesn't have the metadata-nearing-full warning, and that's what tends to
cause these types of thinpool issues.

In addition to the warning, Qubes 4.1 is also doubling (vs. the lvm default
value) the amount of space set aside for lvm thinpool metadata, which will
substantially reduce the chances of ever hitting this issue under 4.1.

Brendan

PS - the above is not helpful for recovering this machine, of course. However,
recovery from this can be very difficult, and even after recovery you are not
guaranteed to recover all the data. The Qubes devs are aware of this and
very much want to avoid these issues in the next release.


Hm, yes, this does not help :/
What about running fstrim on the SSD and trying to boot again?
@brendan: I've seen that you had some thoughts about lvm in some postings,
so would you care to elaborate/brainstorm on the situation I described?
You know, every input is valuable right now :)




Re: Antw: [EXT] [qubes-users] probable lvm thin_pool exhaustion

2020-03-10 Thread maiski



Quoting Ulrich Windl :


For some reason I have a "watch -n30 lvs" running in a big terminal.  
On one of the top lines I see the usage of the thin pool. Of course
this only helps before the problem...


But I thought some app is monitoring the VG; wasn't there some space  
warning before the actual problem?





Of course there was. But at the moment of failure there was none visible,
which does not excuse that beforehand I had created three new VMs and
downloaded a minimal template for fun, so... why keep it simple when it
can be complicated.
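For reference, the standing monitor Ulrich describes is just a one-liner
in a dom0 terminal (the pool name below is the 4.0 default):

  # refresh thin pool data/metadata usage every 30 seconds
  watch -n30 'sudo lvs -o lv_name,data_percent,metadata_percent qubes_dom0/pool00'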






[qubes-users] probable lvm thin_pool exhaustion

2020-03-09 Thread maiski

Hello folks,

I have a standard Qubes 4.0 release install: LUKS + LVM thin pool.
After a sudden reboot and entering the encryption passphrase, the dracut
emergency shell comes up with:
"Check for pool qubes_dom0/pool00 failed (status:1). Manual repair
required!"
The only active LV is qubes_dom0/swap; all the others are inactive.

Step 1:
From https://github.com/QubesOS/qubes-issues/issues/5160, in the dracut
shell:

  lvm vgscan
  lvm vgchange -ay
  lvm lvconvert --repair qubes_dom0/pool00

Result:

  Using default stripesize 64.00 KiB.
  Terminate called after throwing an instance of 'std::runtime_error'
  what(): transaction_manager::new_block() couldn't allocate new block
  Child 7212 exited abnormally
  Repair of thin metadata volume of thin pool qubes_dom0/pool00 failed
  (status:1). Manual repair required!

Step 2:
Since I suspect that my LVM is full (though it does mark 15 GB as free),
I tried the following changes in /etc/lvm/lvm.conf (sketched concretely
below):
thin_pool_autoextend_threshold = 80
thin_pool_autoextend_percent = 2 (since pvs reports PSize 465.56g and
PFree 15.78g, I set this to 2% to be overly cautious and not extend
beyond the ~15 GB marked as free)
auto_activation_volume_list = set to hold the VG, root, pool00, swap and a
VM that I would like to delete to free some space
volume_list = the same as auto_activation_volume_list

Then I tried step 1 again; it did not work, and I got the same result as
above with only qubes_dom0/swap active.
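For clarity, the lvm.conf edits I mean look roughly like this (all in the
activation section; "vm-to-delete-private" is only a placeholder for the
volume of the VM I want to remove):

  activation {
      # try to auto-extend the thin pool once it is 80% full, in 2% steps
      thin_pool_autoextend_threshold = 80
      thin_pool_autoextend_percent = 2

      # activate only the VG/LVs needed to boot and clean up
      auto_activation_volume_list = [ "qubes_dom0", "qubes_dom0/root", "qubes_dom0/pool00", "qubes_dom0/swap", "qubes_dom0/vm-to-delete-private" ]
      volume_list = [ "qubes_dom0", "qubes_dom0/root", "qubes_dom0/pool00", "qubes_dom0/swap", "qubes_dom0/vm-to-delete-private" ]
  }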

Step 3:
Tried:

  lvextend -L+1G qubes_dom0/pool00_tmeta

Result:

  metadata reference count differ for block xx, expected 0, but got 1
  ...
  Check for pool qubes_dom0/pool00 failed (status:1). Manual repair required!

Since I do not know my way around LVM, what do you think would be the best
way out of this?
Adding another external PV (roughly as sketched below)? Migrating to a
bigger PV?
I have not played with backup or archive out of fear of losing any
un-backed-up data, of which there is quite a bit :|
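To make the first option concrete, what I imagine (completely untried, and
/dev/sdb1 is only a placeholder for the external disk/partition) is giving
the VG some extra room and then retrying the repair:

  # from the dracut shell, with the external disk attached
  lvm pvcreate /dev/sdb1
  lvm vgextend qubes_dom0 /dev/sdb1
  # retry the repair / metadata extension now that more free extents exist
  lvm lvconvert --repair qubes_dom0/pool00
  lvm lvextend -L+1G qubes_dom0/pool00_tmeta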

Thanks in advance,
m



Re: [qubes-users] meetup

2018-04-16 Thread maiski

cool!
no I was not aware! cu!

Quoting Michael Carbone :


On 04/16/18 16:08, mai...@maiski.net wrote:

Hello guys,

I have been a Qubes user for nearly three years now and would love to meet
other people, discuss, learn from each other, contribute...
That is why I would like to organize a meetup in the city where I am
currently residing - Berlin.
For starters (the first meetup) I think a cafe would be fine. And then we
can decide where and how often to meet, what we want to do in particular,
etc.

Here is a small Dudle poll. I think until the end of this week there is
enough time to get an idea of whether there are enough people wanting to come:

https://dudle.inf.tu-dresden.de/would_I_like_to_participate_in_a_Berlin_Qubes_meetup/


greets,

m



hey, you may have missed previous emails about it but there is a monthly
meeting in Berlin for Qubes users at a local hackerspace. The most
recent meet-up was today.

You can find out more info here:

https://qubesusersberlin.github.io

There is also a mailing list that you can join to be notified about
meetings or otherwise talk.

Hope to see you at the next meet-up,
Michael






[qubes-users] meetup

2018-04-16 Thread maiski

Hello guys,

I have been a Qubes user for nearly three years now and would love to meet
other people, discuss, learn from each other, contribute...
That is why I would like to organize a meetup in the city where I am
currently residing - Berlin.
For starters (the first meetup) I think a cafe would be fine. And then we
can decide where and how often to meet, what we want to do in particular,
etc.

Here is a small Dudle poll. I think until the end of this week there is
enough time to get an idea of whether there are enough people wanting to
come:


https://dudle.inf.tu-dresden.de/would_I_like_to_participate_in_a_Berlin_Qubes_meetup/

greets,

m



Re: [qubes-users] kernel panic after upgrade

2018-03-05 Thread maiski

Yes, this was a typo.
It was as simple as running grub2-mkconfig again to fix the issue,
but thanks for the answer.
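For reference, on a legacy-BIOS install where dom0 boots via grub2, the
regeneration step is roughly:

  sudo grub2-mkconfig -o /boot/grub2/grub.cfg

(UEFI installs of Qubes 4.0 boot through xen.cfg instead, so the fix would
differ there.)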

Quoting awokd :


On Fri, March 2, 2018 8:22 am, mai...@maiski.net wrote:

Hello,


Unfortunately after the last update of Qubes 4.0 I have a kernel
panic: "unable to mount root fa on unknown block" and would appreciate
if somebody here could give me a tip.


I think there is a typo here, try searching on "unable to mount root fs on
unknown block" instead. Looks like there are several possible causes,
especially if you are dual/multi-booting.






[qubes-users] kernel panic after upgrade

2018-03-02 Thread maiski

Hello,

Unfortunately after the last update of Qubes 4.0 I have a kernel  
panic: "unable to mount root fa on unknown block" and would appreciate  
if somebody here could give me a tip.


I installed Qubes 4.0-RC1 and have only been updating since then.

After the next-to-last update I was not able to boot xen 4.8.3 and
linux 4.14.13-3.
With the previous configuration, xen 4.8 and linux 4.9.56-21, there is
no problem.

My machine is Lenovo T470S.

Does anyone have an idea?

m

