Re: [gentoo-user] [OT] BIOS Best settings (no o.c.) for RYZEN 5 3600 / MSI Tomahawk max ?

2020-05-15 Thread tuxic
On 05/16 12:46, Dale wrote:
> tu...@posteo.de wrote:
> > Hi,
> >
> > I am trying to figure out the best settings (performance wise) for a
> > AMD Ryzen 5 3600 with a MSI Tomahawk max motherboard.
> >
> > I don't want to overclock -- tweaking the BIOS is about finding
> > the optimal settings, as opposed to wasting performance through
> > sub-optimal ones, like not activating the XMP profile and running
> > the RAM at JEDEC speeds instead of what the vendor guarantees.
> >
> > Unfortunately, there are quite a few settings for which I couldn't
> > find any explanation of what they are good for.
> >
> > Any help is very appreciated! :)
> >
> > Cheers,
> > Meino
> 
> I usually buy boards that can overclock but don't do it.  What I usually
> look for, once I get my CPU, memory and all installed, is the selection
> for optimized settings or something to that effect.  I've always found
> that that setting works pretty darn well.  I had to tweak the IOMMU
> setting or something, but other than that, I let it detect the best
> settings.  If I upgrade the BIOS, I repeat that on the first boot up.
> In my experience, it picks good safe settings that result in stable
> systems.
> 
> I've never had an MSI mobo yet, so it may be called something different,
> but even Dell and Gateway usually have something similar to choose from.
> It may be worth looking into.
> 
> Dale
> 
> :-)  :-) 

Hi Dale,

thanks for your info! :)

I think it is called "Auto" in the MSI BIOS.
When using it, JEDEC timings and a command rate of 2
instead of 1 are chosen for DDR4 (as an example)... which
isn't optimal.

These "Auto" defaults were the reason I went looking for better settings.
I think I have to tweak the BIOS settings by hand...

Cheers!
Meino





Re: [gentoo-user] [OT] BIOS Best settings (no o.c.) for RYZEN 5 3600 / MSI Tomahawk max ?

2020-05-15 Thread Dale
tu...@posteo.de wrote:
> Hi,
>
> I am trying to figure out the best settings (performance wise) for a
> AMD Ryzen 5 3600 with a MSI Tomahawk max motherboard.
>
> I don't want to overclock -- tweaking the BIOS is about finding
> the optimal settings, as opposed to wasting performance through
> sub-optimal ones, like not activating the XMP profile and running
> the RAM at JEDEC speeds instead of what the vendor guarantees.
>
> Unfortunately, there are quite a few settings for which I couldn't
> find any explanation of what they are good for.
>
> Any help is very appreciated! :)
>
> Cheers,
> Meino

I usually buy boards that can overclock but don't do it.  What I usually
look for, once I get my CPU, memory and all installed, is the selection
for optimized settings or something to that effect.  I've always found
that that setting works pretty darn well.  I had to tweak the IOMMU
setting or something, but other than that, I let it detect the best
settings.  If I upgrade the BIOS, I repeat that on the first boot up.
In my experience, it picks good safe settings that result in stable
systems.

I've never had an MSI mobo yet, so it may be called something different,
but even Dell and Gateway usually have something similar to choose from.
It may be worth looking into.

Dale

:-)  :-) 


[gentoo-user] [OT] BIOS Best settings (no o.c.) for RYZEN 5 3600 / MSI Tomahawk max ?

2020-05-15 Thread tuxic
Hi,

I am trying to figure out the best settings (performance wise) for a
AMD Ryzen 5 3600 with a MSI Tomahawk max motherboard.

I don't want to overclock -- tweaking the BIOS is about finding
the optimal settings, as opposed to wasting performance through
sub-optimal ones, like not activating the XMP profile and running
the RAM at JEDEC speeds instead of what the vendor guarantees.

Unfortunately, there are quite a few settings for which I couldn't
find any explanation of what they are good for.

Any help is very appreciated! :)

Cheers,
Meino






Re: [gentoo-user] How can I force emerge to use python 3.6?

2020-05-15 Thread William Kenworthy

On 16/5/20 11:34 am, Mike Gilbert wrote:
> On Fri, May 15, 2020 at 10:17 PM William Kenworthy  wrote:
>> Hi Victor,
>>
>> emerge crashes when it tries to add metadata during the merge stage
>> of an emerge-installed python module using 3.7, when PKGDIR is on a
>> moosefs share.  When PKGDIR is local it's fine.
> I have never heard of moosefs. It probably returns an error when
> portage tries to use some of the more advanced file syscalls. Please
> file a bug report.
>
I will file a bug - just trying to make sure it's a real bug and not just
me :)

moosefs has been great so far, but its POSIX support may be incomplete
for some corner cases - its strength is large-scale redundant storage,
not substituting for local storage.  Still, it has been doing OK for some
months now, up until the near-apocalyptic mess that the python upgrade
has become for me ...

BillK






Re: [gentoo-user] How can I force emerge to use python 3.6?

2020-05-15 Thread Mike Gilbert
On Fri, May 15, 2020 at 10:17 PM William Kenworthy  wrote:
>
> Hi Victor,
>
> emerge crashes when it tries to add metadata during the merge stage
> of an emerge-installed python module using 3.7, when PKGDIR is on a
> moosefs share.  When PKGDIR is local it's fine.

I have never heard of moosefs. It probably returns an error when
portage tries to use some of the more advanced file syscalls. Please
file a bug report.



Re: [gentoo-user] How can I force emerge to use python 3.6?

2020-05-15 Thread Mike Gilbert
On Fri, May 15, 2020 at 10:17 PM William Kenworthy  wrote:
>
> Hi Victor,
>
> emerge crashes when it tries to add metadata during the merge stage
> of an emerge-installed python module using 3.7, when PKGDIR is on a
> moosefs share.  When PKGDIR is local it's fine.
>
> I am rebuilding some systems now with 3.6 as the PYTHON_SINGLE_TARGET
> but I was hoping for some way to specify emerge use 3.7 or 3.6 without
> having to rebuild portage and all its dependencies everytime I want to
> switch and test ...

Set PYTHON_TARGETS=python3_6. This will ensure portage and its
dependencies are built for python3.6.

You can then set python3.6 as the default interpreter globally by
putting it first in /etc/python-exec/python-exec.conf. There's a
comment with instructions at the top of that file.

If you only want emerge to use python3.6, put it in
/etc/python-exec/emerge.conf instead.
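
For example, something along these lines (a sketch - adjust as needed,
and double-check the comments in the files themselves):

 # /etc/portage/make.conf
 PYTHON_TARGETS="python3_6"

 # /etc/python-exec/python-exec.conf - preferred interpreters, first match wins
 python3.6
 python3.7

 # or, to affect only emerge: /etc/python-exec/emerge.conf
 python3.6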



Re: [gentoo-user] How can I force emerge to use python 3.6?

2020-05-15 Thread William Kenworthy
Hi Victor,

    emerge crashes when it tries to add metadata during the merge stage
of an emerge-installed python module using 3.7, when PKGDIR is on a
moosefs share.  When PKGDIR is local it's fine.

I am rebuilding some systems now with 3.6 as the PYTHON_SINGLE_TARGET
but I was hoping for some way to specify emerge use 3.7 or 3.6 without
having to rebuild portage and all its dependencies everytime I want to
switch and test ...

I have some systems with common hardware where I can use a buildhost,
and the moosefs share is an easy way to access the built packages across
the network.

Bill K.


On 16/5/20 9:55 am, Victor Ivanov wrote:
> Why do you think emerge might be the issue? It's quite rare for portage
> itself to be causing problems with packages.
>
> That said, if you have good reason to believe so you can adjust the
> PYTHON_TARGETS for sys-apps/portage in /etc/portage/package.use like so:
>
> sys-apps/portage PYTHON_TARGETS: python3_6 -python3_7
>
> Or you can keep both. You will of course need to rebuild portage
> following this change.
>
> Likewise, you can use the syntax of the above entry to adjust the python
> targets for any package.
>
> Your default interpreter choice (as reported by eselect) is likely not
> respected by portage because the current profile defaults only build
> portage against python 3.7.
>
> - Victor
>
> On 16/05/2020 02:32, William Kenworthy wrote:
>> How can I force emerge to use python 3.6 when 3.7 is installed? -
>> eselect list shows 3.6 is #1 and 3.7 as fallback so that doesn't work.
>>
>> I am trying to narrow down a failure which appears to be a combination
>> of building packages that are stored on a moosefs network share and
>> python 3.7
>>
>> BillK
>>
>>




Re: [gentoo-user] How can I force emerge to use python 3.6?

2020-05-15 Thread Victor Ivanov
Why do you think emerge might be the issue? It's quite rare for portage
itself to be causing problems with packages.

That said, if you have good reason to believe so you can adjust the
PYTHON_TARGETS for sys-apps/portage in /etc/portage/package.use like so:

sys-apps/portage PYTHON_TARGETS: python3_6 -python3_7

Or you can keep both. You will of course need to rebuild portage
following this change.
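
For instance, a one-shot rebuild will do (--oneshot just keeps it out of
the world file):

 $ emerge --ask --oneshot sys-apps/portage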

Likewise, you can use the syntax of the above entry to adjust the python
targets for any package.

Your default interpreter choice (as reported by eselect) is likely not
respected by portage because the current profile defaults only build
portage against python 3.7.

- Victor

On 16/05/2020 02:32, William Kenworthy wrote:
> How can I force emerge to use python 3.6 when 3.7 is installed? -
> eselect list shows 3.6 is #1 and 3.7 as fallback so that doesn't work.
> 
> I am trying to narrow down a failure which appears to be a combination
> of building packages that are stored on a moosefs network share and
> python 3.7
> 
> BillK
> 
> 





[gentoo-user] How can I force emerge to use python 3.6?

2020-05-15 Thread William Kenworthy
How can I force emerge to use python 3.6 when 3.7 is installed? -
eselect list shows 3.6 is #1 and 3.7 as fallback so that doesn't work.

I am trying to narrow down a failure which appears to be a combination
of building packages that are stored on a moosefs network share and
python 3.7

BillK






Re: [gentoo-user] Re: Building packages in different prefix without rebuilding system packages

2020-05-15 Thread François-Xavier Carton
On Fri, May 15, 2020 at 12:53:16PM +0200, Michael Haubenwallner wrote:
> Hi François-Xavier,
> 
> What you're after is known as "Prefix/Stack", where you have some "base"
> Prefix whose portage manages packages in another - stacked - Prefix.
> 
> While this already works with "Prefix/Guest" as the base Prefix, there is no
> technical reason it should not work with "Prefix/Standalone" or even "Gentoo
> Linux" as the base Prefix.  The only possible problem is that "Prefix/Guest"
> uses a portage version with additional patches.
> 
> But still, you can get an idea about how this works, using "Prefix/Guest":
> 
>  $ docker run -t -i gentooprefix/prefix-guest-64bit-fedora -c /bin/bash
> 
> At the docker prompt:
> 
> Enter the Guest Prefix preinstalled to /tmp/gentoo:
>  $ /tmp/gentoo/startprefix
> 
> Set up your stacked Prefix:
>  $ prefix-stack-setup --eprefix=$HOME/stack \
>        --profile=/tmp/gentoo/etc/portage/make.profile
> Note that "~/stack" will not work here, bash refuses to resolve '~' after '='.
> 
> Leave the Guest Prefix:
>  $ exit
> 
> Enter your just created stacked Prefix:
>  $ ~/stack/startprefix
> 
> Emerge your package, for example:
>  $ emerge sys-libs/zlib
> 
> Have fun!
> 
> HTH,
> /haubi/
> 

Thanks, this looks great! I'll play with it and see how it works :)



Re: [gentoo-user] New system, systemd, and dm-integrity

2020-05-15 Thread Rich Freeman
On Fri, May 15, 2020 at 11:27 AM Wols Lists  wrote:
>
> The crucial point here is that dm-integrity protects against something
> *outside* your stack trashing part of the disk. If something came along
> and wrote randomly to /dev/sda, then when my filesystem tried to
> retrieve a file, dm-integrity would cause sda to return a read error,
> raid would say "oops", read it from sdb, and rewrite sda.

Yup.  I understand what it does.

Main reason to use it would be if it performs better than zfs which
offers the same guarantees.  That is why I was talking about whether
lizardfs does overwrites in-place or not - that is where I'd expect
zfs to potentially have performance issues.

Most of the protections in lizardfs happen above the single-host level
- I can smash a single host with a hammer while it is operating and it
shouldn't cause more than a very slight delay and trigger a rebalance.
That also helps protect against failure modes like HBA failures that
could take out multiple disks at once (though to be fair you can
balance your mirrors across HBAs if you have more than one).

-- 
Rich



Re: [gentoo-user] New system, systemd, and dm-integrity

2020-05-15 Thread Wols Lists
On 15/05/20 14:49, Rich Freeman wrote:
> On Fri, May 15, 2020 at 9:18 AM antlists  wrote:
>>
>> On 15/05/2020 12:30, Rich Freeman wrote:

I've snipped it, but I can't imagine dracut/mdadm having the problems
you describe today - there are too many systems out there that boot from
lvm/mdadm. My problem is I'm adding dm-integrity to the mix ...
> 
> So, compared to what you're doing I could see the following advantages:
> 
> 1.  All the filesystem-layer stuff which obviously isn't in-scope for
> the lower layers, including snapshots (obviously those can be done
> with lvm but it is a bit cleaner at the filesystem level).  I'd argue
> that some of this stuff isn't as flexible as with btrfs but it will be
> far superior to something like ext4 on top of what you're doing.
> 
> 2.  No RAID write-hole.  I'd think that your solution with the
> integrity layer would detect corruption resulting from the write hole,
> but I don't think it could prevent it, since a RAID stripe is still
> overwritten in place.  But, I've never had a conversation with an
> md-raid developer so perhaps you have a more educated view on the
> matter.

I don't know that it would. The write hole is where all the blocks are
intact, but not all of them make it to disk. That said, the write hole
has been pretty much fixed now - I think new arrays can use journalling,
which deals with it. That's certainly been discussed on the list.
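
If I remember right, you ask for the journal at array creation time,
something like this (device names made up, untested):

 $ mdadm --create /dev/md0 --level=5 --raid-devices=3 \
       /dev/sda1 /dev/sdb1 /dev/sdc1 \
       --write-journal /dev/nvme0n1p1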
> 
> 3.  COW offers some of the data-integrity benefits of full data
> journaling without the performance costs of this.  On the other hand
> it probably is not going to perform as well as overwriting in place
> without any data journaling.  In theory this is more of a
> filesystem-level feature though.
> 
> 4.  In the future COW with zfs could probably enable better
> performance on SSD/SMR with TRIM by structuring writes to consolidate
> free blocks into erase zones.  However, as far as I'm aware that is a
> theoretical future benefit and not anything available, and I have no
> idea if anybody is working on that.  This sort of benefit would
> require the vertical integration that zfs uses.
> 
> In general zfs is much more stable than btrfs and far less likely to
> eat your data.  And FWIW I did once (many years ago) have
> ext4+lvm+mdadm eat my data - I think it was due to some kind of lvm
> metadata corruption or something like that, because basically an fsck
> on one ext4 partition scrambled a different ext4 partition, which
> obviously should not be possible if lvm is working right.  I have no
> idea what the root cause of that was - could have been bad RAM or
> something which of course can mess up anything short of a distributed
> filesystem with integrity checking above the host level (which, IMO,
> most of the solutions don't do as well as they could).
> 
> One big disadvantage with zfs is that it is far less flexible at the
> physical layer.  You can add the equivalent of LVM PVs, and you can
> expand a PV, but you can't remove a PV in anything but the latest
> version of zfs, and I think there are some limitations around how this
> works.  You can't reshape the equivalent of an mdadm array, but you
> can replace a drive in an array and grow an array if all the
> underlying devices have enough space.  You can add/remove mirrors from
> the equivalent of a raid1 to freely go between no-redundancy to any
> multiplicity you wish.  Striped arrays are basically fixed in layout
> once created.
> 
>> As the linux raid wiki says (I wrote it :-) do you want the complexity
>> of a "do it all" filesystem, or the abstraction of dedicated layers?
> 
> Yeah, it is a well-established argument and has some merit.
> 
> I'm not sure I'd go this route for my regular hosts since zfs works
> reasonably well (though your solution is more flexible than zfs).
> 
> However, I might evaluate how dm-integrity plus ext4 (maybe with LVM
> in-between) works on my lizardfs chunkservers.  These have redundancy
> above the host level, but I do want integrity checking for static data
> issues, and I'm not sure that lizardfs provides any guarantees here
> (plus having it at the host level would probably perform better
> anyway).  If the integrity layer returned an io error lizardfs would
> just overwrite the impacted files in-place most likely, so there would
> be no reads from the impacted block until it was rewritten which
> presumably would clear the integrity error.
> 
> That said, I'm not sure that lizardfs even overwrites anything
> in-place in normal use so it might not make any difference vs zfs.  It
> breaks all data into "chunks" and I'd think that if data were
> overwritten in place at the filesystem level it probably would end up
> in a new chunk, with the old one garbage collected if it were not
> snapshotted.
> 
The crucial point here is that dm-integrity protects against something
*outside* your stack trashing part of the disk. If something came along
and wrote randomly to /dev/sda, then when my filesystem tried to
retrieve a file, 

Re: [gentoo-user] New system, systemd, and dm-integrity

2020-05-15 Thread Rich Freeman
On Fri, May 15, 2020 at 9:18 AM antlists  wrote:
>
> On 15/05/2020 12:30, Rich Freeman wrote:
> > The actual problem that this module solves is no-doubt long solved
> > upstream, but here is the blog post on dracut modules (which is fairly
> > well-documented in the official docs as well):
> > https://rich0gentoo.wordpress.com/2012/01/21/a-quick-dracut-module/
>
> I don't think it is ... certainly I'm not aware of anything other than
> LUKS that uses dm-integrity, and LUKS sets it up itself.

I was referring to my specific problem in the blog article with mdadm
not actually detecting drives for whatever reason.

I no longer use md-raid on any of my systems so I can't vouch for
whether it is still an issue, but something like that was probably
fixed somewhere.

> > If your module is reasonably generic you could probably get upstream
> > to merge it as well.
>
> No. Like LUKS, I intend to merge the code into mdadm and let the raid
> side handle it. If mdadm detects a dm-integrity/raid setup, it'll set up
> dm-integrity and then recurse to set up raid.

Seems reasonable enough, though you could probably argue for
separation of concerns to do it in dracut.  In any case, I do suspect
the dracut folks would consider such a use case valid for inclusion in
the default package if you do want to have a module for it.

> openSUSE is my only experience of btrfs. And it hasn't been nice. When
> it goes wrong it's nasty. Plus only raid 1 really works - I've heard
> that 5 and 6 have design flaws which means it will be very hard to get
> them to work properly.

Yeah, I moved away from btrfs as well for the same reasons.  I got
into it years ago thinking that it was still a bit unpolished but
seemed to be rapidly gaining traction.  For whatever reason they never
got regressions under control and I got burned more than once by it.
I did keep backups but restoration is of course painful.

> I've never met zfs.

So, compared to what you're doing I could see the following advantages:

1.  All the filesystem-layer stuff which obviously isn't in-scope for
the lower layers, including snapshots (obviously those can be done
with lvm but it is a bit cleaner at the filesystem level).  I'd argue
that some of this stuff isn't as flexible as with btrfs but it will be
far superior to something like ext4 on top of what you're doing.

2.  No RAID write-hole.  I'd think that your solution with the
integrity layer would detect corruption resulting from the write hole,
but I don't think it could prevent it, since a RAID stripe is still
overwritten in place.  But, I've never had a conversation with an
md-raid developer so perhaps you have a more educated view on the
matter.

3.  COW offers some of the data-integrity benefits of full data
journaling without the performance costs of this.  On the other hand
it probably is not going to perform as well as overwriting in place
without any data journaling.  In theory this is more of a
filesystem-level feature though.

4.  In the future COW with zfs could probably enable better
performance on SSD/SMR with TRIM by structuring writes to consolidate
free blocks into erase zones.  However, as far as I'm aware that is a
theoretical future benefit and not anything available, and I have no
idea if anybody is working on that.  This sort of benefit would
require the vertical integration that zfs uses.

In general zfs is much more stable than btrfs and far less likely to
eat your data.  And FWIW I did once (many years ago) have
ext4+lvm+mdadm eat my data - I think it was due to some kind of lvm
metadata corruption or something like that, because basically an fsck
on one ext4 partition scrambled a different ext4 partition, which
obviously should not be possible if lvm is working right.  I have no
idea what the root cause of that was - could have been bad RAM or
something which of course can mess up anything short of a distributed
filesystem with integrity checking above the host level (which, IMO,
most of the solutions don't do as well as they could).

One big disadvantage with zfs is that it is far less flexible at the
physical layer.  You can add the equivalent of LVM PVs, and you can
expand a PV, but you can't remove a PV in anything but the latest
version of zfs, and I think there are some limitations around how this
works.  You can't reshape the equivalent of an mdadm array, but you
can replace a drive in an array and grow an array if all the
underlying devices have enough space.  You can add/remove mirrors from
the equivalent of a raid1 to freely go between no-redundancy to any
multiplicity you wish.  Striped arrays are basically fixed in layout
once created.
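
For concreteness, the operations I have in mind look roughly like this
(pool and device names are made up):

 $ zpool attach tank sda sdb     # add a mirror leg
 $ zpool detach tank sdb         # drop back to no redundancy
 $ zpool replace tank sda sdc    # swap a drive out
 $ zpool remove tank sdb         # top-level device removal, recent zfs only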

> As the linux raid wiki says (I wrote it :-) do you want the complexity
> of a "do it all" filesystem, or the abstraction of dedicated layers?

Yeah, it is a well-established argument and has some merit.

I'm not sure I'd go this route for my regular hosts since zfs works
reasonably well (though your solution is more flexible than zfs).

However, 

Re: [gentoo-user] New system, systemd, and dm-integrity

2020-05-15 Thread antlists

On 15/05/2020 12:30, Rich Freeman wrote:

> On Fri, May 15, 2020 at 7:16 AM antlists  wrote:
>>
>> On 15/05/2020 11:20, Neil Bothwick wrote:
>>>
>>> Or you can create a custom module, they are just shell scripts. I recall
>>> reading a blog post by Rich on how to do this a few years ago.
>>>
>> My custom module calls a shell script, so it shouldn't be that hard from
>> what you say. I then need to make sure the program it invokes
>> (integritysetup) is in the initramfs?
>
> The actual problem that this module solves is no-doubt long solved
> upstream, but here is the blog post on dracut modules (which is fairly
> well-documented in the official docs as well):
> https://rich0gentoo.wordpress.com/2012/01/21/a-quick-dracut-module/

I don't think it is ... certainly I'm not aware of anything other than
LUKS that uses dm-integrity, and LUKS sets it up itself.

> Basically you have a shell script that tells dracut when building the
> initramfs to include in it whatever you need.  Then you have the phase
> hooks that actually run whatever you need to run at the appropriate
> time during boot (presumably before the mdadm stuff runs).
>
> My example doesn't install any external programs, but there is a
> simple syntax for that.
>
> If your module is reasonably generic you could probably get upstream
> to merge it as well.

No. Like LUKS, I intend to merge the code into mdadm and let the raid
side handle it. If mdadm detects a dm-integrity/raid setup, it'll set up
dm-integrity and then recurse to set up raid.

> Good luck with it, and I'm curious as to how you like this setup vs
> something more "conventional" like zfs/btrfs.  I'm using single-volume
> zfs for integrity for my lizardfs chunkservers and it strikes me that
> maybe dm-integrity could accomplish the same goal with perhaps better
> performance (and less kernel fuss).  I'm not sure I'd want to replace
> more general-purpose zfs with this, though the flexibility of
> lvm+mdadm is certainly attractive.

openSUSE is my only experience of btrfs. And it hasn't been nice. When 
it goes wrong it's nasty. Plus only raid 1 really works - I've heard 
that 5 and 6 have design flaws which means it will be very hard to get 
them to work properly. I've never met zfs.


As the linux raid wiki says (I wrote it :-) do you want the complexity 
of a "do it all" filesystem, or the abstraction of dedicated layers?


The big problem that md-raid has is that it has no way of detecting or 
dealing with corruption underneath. Hence me wanting to put dm-integrity 
underneath, because that's dedicated to detecting corruption. So if 
something goes wrong, the raid gets a read error and sorts it out.


Then lvm provides the snap-shotting and sort-of-backups etc.

But like all these things, it's learning that's the big problem. With my 
main system, I don't want to experiment. My first gentoo system was an 
Athlon K8 Thunderbird on ext. The next one is my current Athlon X III 
mirrored across two 3TB drives. Now I'm throwing dm-integrity and lvm 
into the mix with two 4TB drives. So I'm going to try and learn KVM ... :-)


Cheers,
Wol



Re: [gentoo-user] jack vs jack2 USE-flag-wise?

2020-05-15 Thread Francesco Turco
On Fri, May 15, 2020, at 13:51, tu...@posteo.de wrote:
> I want to set 'jack' as a default USE flag - but my system is a
> multicore/multithreaded one... so I don't need jack, aka
> 
> media-sound/jack-audio-connection-kit
> 
> but I think I need this one:
> 
> * media-sound/jack2
> Description: Jackdmp jack implementation for
> multi-processor machine
> 
> What USE flag pulls in this one instead of the first one?

Some packages depend on virtual/jack.
The virtual/jack ebuild depends on either media-sound/jack-audio-connection-kit 
or media-sound/jack2.
So if you prefer the latter, just install it and uninstall the former.
But please note that some other packages depend specifically on 
media-sound/jack-audio-connection-kit when the "jack" USE flag is enabled.
So perhaps you cannot avoid keeping both.
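
If it turns out nothing on your system needs the old one, the swap is
roughly (from memory, untested):

 $ emerge --ask media-sound/jack2
 $ emerge --ask --depclean media-sound/jack-audio-connection-kit

--depclean will refuse to remove it if something still depends on it,
which is a useful check in itself.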

-- 
https://fturco.net/



[gentoo-user] jack vs jack2 USE-flag-wise?

2020-05-15 Thread tuxic
Hi,

I want to set 'jack' as a default USE flag - but my system is a
multicore/multithreaded one... so I don't need jack, aka

media-sound/jack-audio-connection-kit

but I think I need this one:

* media-sound/jack2
Description: Jackdmp jack implementation for multi-processor
machine

What USE flag pulls in this one instead of the first one?

Cheers!
Meino






Re: [gentoo-user] New system, systemd, and dm-integrity

2020-05-15 Thread Rich Freeman
On Fri, May 15, 2020 at 7:16 AM antlists  wrote:
>
> On 15/05/2020 11:20, Neil Bothwick wrote:
> >
> > Or you can create a custom module, they are just shell scripts. I recall
> > reading a blog post by Rich on how to do this a few years ago.
> >
> My custom module calls a shell script, so it shouldn't be that hard from
> what you say. I then need to make sure the program it invokes
> (integritysetup) is in the initramfs?

The actual problem that this module solves is no-doubt long solved
upstream, but here is the blog post on dracut modules (which is fairly
well-documented in the official docs as well):
https://rich0gentoo.wordpress.com/2012/01/21/a-quick-dracut-module/

Basically you have a shell script that tells dracut when building the
initramfs to include in it whatever you need.  Then you have the phase
hooks that actually run whatever you need to run at the appropriate
time during boot (presumably before the mdadm stuff runs).

My example doesn't install any external programs, but there is a
simple syntax for that.
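
Roughly, a module is just a directory under /usr/lib/dracut/modules.d/
with a module-setup.sh.  From memory (untested; the module name, hook
point and device names below are placeholders you would adjust):

 # /usr/lib/dracut/modules.d/90integrity/module-setup.sh
 check() {
     # only include the module if the tool exists on the build host
     require_binaries integritysetup || return 1
     return 0
 }

 depends() {
     # pull in device-mapper support
     echo dm
     return 0
 }

 install() {
     # copy the binary and run our script early in boot,
     # before the raid gets assembled
     inst_multiple integritysetup
     inst_hook pre-trigger 05 "$moddir/integrity-open.sh"
 }

 # /usr/lib/dracut/modules.d/90integrity/integrity-open.sh
 #!/bin/sh
 # placeholder device and mapping name
 integritysetup open /dev/sdb3 int-sdb3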

If your module is reasonably generic you could probably get upstream
to merge it as well.

Good luck with it, and I'm curious as to how you like this setup vs
something more "conventional" like zfs/btrfs.  I'm using single-volume
zfs for integrity for my lizardfs chunkservers and it strikes me that
maybe dm-integrity could accomplish the same goal with perhaps better
performance (and less kernel fuss).  I'm not sure I'd want to replace
more general-purpose zfs with this, though the flexibility of
lvm+mdadm is certainly attractive.

-- 
Rich



Re: [gentoo-user] New system, systemd, and dm-integrity

2020-05-15 Thread antlists

On 15/05/2020 11:20, Neil Bothwick wrote:

> On Fri, 15 May 2020 11:19:06 +0100, Neil Bothwick wrote:
>
>> How are you generating the initramfs? If you use dracut, there are
>> options you can add to its config directory, such as install_items to
>> make sure your service files are included.

I presume I'll be using dracut ...

> Or you can create a custom module, they are just shell scripts. I recall
> reading a blog post by Rich on how to do this a few years ago.

My custom module calls a shell script, so it shouldn't be that hard from
what you say. I then need to make sure the program it invokes
(integritysetup) is in the initramfs?


Cheers,
Wol



[gentoo-user] Re: Building packages in different prefix without rebuilding system packages

2020-05-15 Thread Michael Haubenwallner
Hi François-Xavier,

On 5/14/20 7:02 AM, François-Xavier Carton wrote:
> Hi,
> 
> Is there a way of installing packages in a different prefix while still
> using system packages? I've tried setting EPREFIX, however doing that
> will install all dependencies in the prefix, even if they are already
> installed in the system.
> 
> I was hoping to install some packages in user directories, but I also
> don't want to duplicate the packages installed globally. For example,
> most packages eventually depend on gcc, which I definitely don't want to
> compile twice. So ideally, only dependencies that are not installed
> globally should be pulled in.
> 
> I was not able to find a way of doing that, but I feel like it shouldn't
> be too hard, because EPREFIX almost does what I want. Does someone know
> if it's possible without too much tweaking?

What you're after is known as "Prefix/Stack", where you have some "base"
Prefix whose portage manages packages in another - stacked - Prefix.

While this already works with "Prefix/Guest" as the base Prefix, there is no
technical reason it should not work with "Prefix/Standalone" or even "Gentoo
Linux" as the base Prefix.  The only possible problem is that "Prefix/Guest"
uses a portage version with additional patches.

But still, you can get an idea about how this works, using "Prefix/Guest":

 $ docker run -t -i gentooprefix/prefix-guest-64bit-fedora -c /bin/bash

At the docker prompt:

Enter the Guest Prefix preinstalled to /tmp/gentoo:
 $ /tmp/gentoo/startprefix

Set up your stacked Prefix:
 $ prefix-stack-setup --eprefix=$HOME/stack \
       --profile=/tmp/gentoo/etc/portage/make.profile
Note that "~/stack" will not work here, bash refuses to resolve '~' after '='.

Leave the Guest Prefix:
 $ exit

Enter your just created stacked Prefix:
 $ ~/stack/startprefix

Emerge your package, for example:
 $ emerge sys-libs/zlib

Have fun!

HTH,
/haubi/



Re: [gentoo-user] New system, systemd, and dm-integrity

2020-05-15 Thread Neil Bothwick
On Fri, 15 May 2020 11:19:06 +0100, Neil Bothwick wrote:

> How are you generating the initramfs? If you use dracut, there are
> options you can add to its config directory, such as install_items to
> make sure your service files are included.

Or you can create a custom module, they are just shell scripts. I recall
reading a blog post by Rich on how to do this a few years ago.


-- 
Neil Bothwick

If the cops arrest a mime, do they tell her she has the right to remain
silent?




Re: [gentoo-user] New system, systemd, and dm-integrity

2020-05-15 Thread Neil Bothwick
On Fri, 15 May 2020 09:55:57 +0100, Wols Lists wrote:

> So currently I have
> 
> sdb
> --> sdb3
>     --> dm-integrity
>         --> md-raid
>             --> lvm
>                 --> root
> 
> And my root partition is on lvm. Currently I have a custom systemd
> config file that sets up dm-integrity. How do I add this to the gentoo
> initramfs? Without it raid won't recognise the disk, so there'll be no
> root partition to switch to at boot.

How are you generating the initramfs? If you use dracut, there are
options you can add to its config directory, such as install_items to
make sure your service files are included.
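
Something along these lines in a dracut config snippet should do it (the
paths and unit name below are just examples):

 # /etc/dracut.conf.d/10-dm-integrity.conf
 install_items+=" /usr/sbin/integritysetup /etc/systemd/system/dm-integrity-sdb3.service "

You may also need to pull in whatever .wants/ symlink enables the unit,
so systemd actually starts it inside the initramfs.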


-- 
Neil Bothwick

Last words of a Windows user: = Where do I have to click now? - There?




[gentoo-user] New system, systemd, and dm-integrity

2020-05-15 Thread Wols Lists
I'm finally building my new system, but I'm pretty certain I'll need
some advice to get it to boot. As you might guess from the subject the
"problem" is dm-integrity.

I'm using openSUSE as my host system, which I used to set up the disk(s).

So currently I have

sdb
--> sdb3
    --> dm-integrity
        --> md-raid
            --> lvm
                --> root

And my root partition is on lvm. Currently I have a custom systemd
config file that sets up dm-integrity. How do I add this to the gentoo
initramfs? Without it raid won't recognise the disk, so there'll be no
root partition to switch to at boot.
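
For the curious, such a unit boils down to something like this (simplified,
and the device and mapping name here are just examples):

 [Unit]
 Description=dm-integrity mapping for /dev/sdb3
 DefaultDependencies=no
 Before=local-fs-pre.target

 [Service]
 Type=oneshot
 RemainAfterExit=yes
 ExecStart=/sbin/integritysetup open /dev/sdb3 int-sdb3
 ExecStop=/sbin/integritysetup close int-sdb3

 [Install]
 WantedBy=local-fs-pre.target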

Plan for the future is to add dm-integrity recognition to upstream
mdadm, but for that I need my new system, so I can demote my old
system to a test-bed.

Cheers,
Wol