[gentoo-user] The meaning of number in brackets in /proc/cpuinfo power management?

2013-09-20 Thread Pandu Poluan
Hello list!

Does anyone know the meaning of the 'number between brackets' in the
power management line of /proc/cpuinfo?

For instance (I snipped the flags line so as not to clutter the email):

processor   : 0
vendor_id   : AuthenticAMD
cpu family  : 21
model   : 2
model name  : AMD Opteron(tm) Processor 6386 SE
stepping: 0
cpu MHz : 2800.110
cache size  : 2048 KB
fdiv_bug: no
hlt_bug : no
f00f_bug: no
coma_bug: no
fpu : yes
fpu_exception   : yes
cpuid level : 13
wp  : yes
flags   : --snip--
bogomips: 5631.71
clflush size: 64
cache_alignment : 64
address sizes   : 48 bits physical, 48 bits virtual
power management: ts ttp tm 100mhzsteps hwpstate [9] [10]

What's [9] and [10] supposed to mean?

(Note: The OS is not actually Gentoo, but this list is sooo
knowledgeable, and methinks the output of /proc/cpuinfo is quite
universal...)


Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



LVM2+mdraid+systemd (was Re: [gentoo-user] systemd and lvm)

2013-09-20 Thread Canek Peláez Valdés
On Fri, Sep 13, 2013 at 7:42 AM, Stefan G. Weichinger li...@xunil.at wrote:
 On 12.09.2013 20:23, Canek Peláez Valdés wrote:

 Stefan, what initramfs are you using?

 dracut, run via your kerninst-script.

 Could you please explain exactly what your layout is? From drives to
 partitions to PVs, VGs and LVs? And throw in there also the LUKS and
 RAID (if used) setup. I will try to replicate that in a VM. Next week,
 since we have a holiday weekend coming.

 thanks for your offer.

 I will happily list my setup, BUT let me say first that the latest
 sys-fs/lvm2-2.02.100 seems to have fixed that semaphore-issue.

 After booting my desktop with it I quickly tested:

 # lvcreate -n test -L1G VG03
   Logical volume test created
 #

 fine!

 Three times ok ...

 But I still face the fact that the LVs aren't activated at boot time.
 A manual vgchange -ay is needed ... or that self-written lvm.service has
 to be enabled, as mentioned somewhat earlier.

 Here is my setup:

 # lsblk
 NAME                 MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
 sda                    8:0    0 931,5G  0 disk
 ├─sda1                 8:1    0     2M  0 part
 ├─sda2                 8:2    0     2G  0 part  [SWAP]
 ├─sda3                 8:3    0   600G  0 part
 │ └─md127              9:127  0 595,1G  0 raid1
 │   ├─VG03-music     253:0    0   190G  0 lvm   /mnt/music
 │   ├─VG03-platz     253:1    0   200G  0 lvm   /mnt/platz
 │   ├─VG03-media     253:2    0    45G  0 lvm   /mnt/media
 │   ├─VG03-home      253:3    0    30G  0 lvm
 │   ├─VG03-oopsfiles 253:4    0    12G  0 lvm   /mnt/oopsfiles
 │   ├─VG03-dropbox   253:5    0     5G  0 lvm   /mnt/dropbox
 │   ├─VG03-distfiles 253:6    0    10G  0 lvm   /usr/portage/distfiles
 │   ├─VG03-gentoo32  253:7    0    15G  0 lvm   /mnt/gentoo32
 │   ├─VG03-xp        253:8    0    40G  0 lvm
 │   └─VG03-test      253:9    0     1G  0 lvm
 └─sda6                 8:6    0    50G  0 part
   └─md4                9:4    0    50G  0 raid1
 sdb                    8:16   0 931,5G  0 disk
 ├─sdb1                 8:17   0   100M  0 part
 ├─sdb2                 8:18   0  98,8G  0 part
 ├─sdb3                 8:19   0    50G  0 part
 │ └─md4                9:4    0    50G  0 raid1
 ├─sdb4                 8:20   0  12,4G  0 part
 └─sdb6                 8:22   0 595,1G  0 part
   └─md127              9:127  0 595,1G  0 raid1
     ├─VG03-music     253:0    0   190G  0 lvm   /mnt/music
     ├─VG03-platz     253:1    0   200G  0 lvm   /mnt/platz
     ├─VG03-media     253:2    0    45G  0 lvm   /mnt/media
     ├─VG03-home      253:3    0    30G  0 lvm
     ├─VG03-oopsfiles 253:4    0    12G  0 lvm   /mnt/oopsfiles
     ├─VG03-dropbox   253:5    0     5G  0 lvm   /mnt/dropbox
     ├─VG03-distfiles 253:6    0    10G  0 lvm   /usr/portage/distfiles
     ├─VG03-gentoo32  253:7    0    15G  0 lvm   /mnt/gentoo32
     ├─VG03-xp        253:8    0    40G  0 lvm
     └─VG03-test      253:9    0     1G  0 lvm
 sdc                    8:32   0  55,9G  0 disk
 ├─sdc1                 8:33   0    25G  0 part  /
 ├─sdc2                 8:34   0     2G  0 part
 └─sdc3                 8:35   0  28,9G  0 part  /home
 sr0                   11:0    1  1024M  0 rom



 This pretty much says it all, right?

 2 hdds sda and sdb
 1 ssd sdc

 root-fs and /home on ssd ...

 sda and sdb form two RAID arrays (rather ugly names and partitions ...
 grown over time):

 # cat /proc/mdstat
 Personalities : [raid1]
 md4 : active raid1 sdb3[0] sda6[2]
   52395904 blocks super 1.2 [2/2] [UU]

 md127 : active raid1 sdb6[0] sda3[1]
   623963072 blocks [2/2] [UU]

 unused devices: <none>


 # pvs
   PV VG   Fmt  Attr PSize   PFree
   /dev/md127 VG03 lvm2 a--  595,05g 47,05g

Sorry I took my time, I was busy.

Well, yours is a complex setup. Here is a similar, although simpler, version:

NAME                 MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sr0                   11:0    1 1024M  0 rom
vda                  253:0    0    5G  0 disk
|-vda1               253:1    0   95M  0 part  /boot
|-vda2               253:2    0  1.9G  0 part  [SWAP]
`-vda3               253:3    0    3G  0 part  /home
vdb                  253:16   0    5G  0 disk
`-vdb1               253:17   0    5G  0 part  /
vdc                  253:32   0    5G  0 disk
`-vdc1               253:33   0    5G  0 part
  `-md127              9:127  0    5G  0 raid1
    |-vg-vol1 (dm-0) 254:0    0    2G  0 lvm   /home/canek/Music
    |-vg-vol2 (dm-1) 254:1    0    2G  0 lvm   /home/canek/Pictures
    `-vg-vol3 (dm-2) 254:2    0 1020M  0 lvm   /home/canek/Videos
vdd                  253:48   0    5G  0 disk
`-vdd1               253:49   0    5G  0 part
  `-md127              9:127  0    5G  0 raid1
    |-vg-vol1 (dm-0) 254:0    0    2G  0 lvm   /home/canek/Music
    |-vg-vol2 (dm-1) 254:1    0    2G  0 lvm   /home/canek/Pictures
    `-vg-vol3 (dm-2) 254:2    0 1020M  0 lvm   /home/canek/Videos

/boot on vda1 as ext2, / (root) on vdb1 as ext4, /home on vda3 as
ext4, vda2 as swap, and vdc1 and vdd1 as the members of the RAID1
(md127) that backs the LVM volume group vg.
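
For reference, the RAID1 + LVM part of such a test VM can be put together
with something like this (a sketch only; device, VG and LV names match the
lsblk above, sizes are approximate):

# Mirror the two spare disks and put the VG with three LVs on top
# (names/sizes as in the lsblk above; adjust to taste):
mdadm --create /dev/md127 --level=1 --raid-devices=2 /dev/vdc1 /dev/vdd1
pvcreate /dev/md127
vgcreate vg /dev/md127
lvcreate -n vol1 -L 2G vg
lvcreate -n vol2 -L 2G vg
lvcreate -n vol3 -l 100%FREE vg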

Re: [gentoo-user] ZFS

2013-09-20 Thread Joerg Schilling
Douglas J Hunley doug.hun...@gmail.com wrote:

 1TB drives are right on the border of switching from RAIDZ to RAIDZ2.
 You'll see people argue for both sides at this size, but the 'saner
 default' would be to use RAIDZ2. You're going to lose storage space, but
 gain an extra parity drive (think RAID6). Consumer grade hard drives are
 /going/ to fail during a resilver (Murphy's Law) and that extra parity
 drive is going to save your bacon.

The main advantage of RAIDZ2 is that you can remove one disk and the RAID is 
still operative. Now you put in a bigger disk. Repeat until you have replaced 
all disks, and you have grown your storage.
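
In zpool terms that replace-and-grow cycle looks roughly like this (a sketch;
the pool and device names are placeholders, not from this thread):

# Swap members one at a time and let each resilver finish before the next:
zpool replace tank old-disk bigger-disk
zpool status tank            # wait until the resilver completes
# ... repeat for every member, then let the pool use the extra space:
zpool set autoexpand=on tank
zpool online -e tank bigger-disk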

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [gentoo-user] ZFS

2013-09-20 Thread Tanstaafl
On 2013-09-20 5:17 AM, Joerg Schilling 
joerg.schill...@fokus.fraunhofer.de wrote:

Douglas J Hunley doug.hun...@gmail.com wrote:


1TB drives are right on the border of switching from RAIDZ to RAIDZ2.
You'll see people argue for both sides at this size, but the 'saner
default' would be to use RAIDZ2. You're going to lose storage space, but
gain an extra parity drive (think RAID6). Consumer grade hard drives are
/going/ to fail during a resilver (Murphy's Law) and that extra parity
drive is going to save your bacon.


The main advantage of RAIDZ2 is that you can remove one disk and the RAID is
still operative. Now you put in a bigger disk. Repeat until you have replaced
all disks, and you have grown your storage.


Interesting, thanks... :)



[gentoo-user] Comparing RAID5/6 rebuild times, SATA vs SAS vs SSD

2013-09-20 Thread Tanstaafl

Hi all,

Given that one of the big reasons I stopped using RAID5/6 was the 
rebuild times - they can be DAYS for a large array - I am very curious whether 
anyone has done, or knows of anyone who has done, any tests comparing 
rebuild times when using slow SATA, faster SAS and the fastest SSD drives.


Of course, this question is moot if using ZFS RAID, but not every 
situation or circumstance will allow it...


Thanks



Re: [gentoo-user] The meaning of number in brackets in /proc/cpuinfo power management?

2013-09-20 Thread Todd Goodman
* Pandu Poluan pa...@poluan.info [130920 03:45]:
 Hello list!
 
 Does anyone know the meaning of the 'number between brackets' in the
 power management line of /proc/cpuinfo?
 
 For instance (I snipped the flags line so as not to clutter the email):
 
 processor   : 0
 vendor_id   : AuthenticAMD
 cpu family  : 21
 model   : 2
 model name  : AMD Opteron(tm) Processor 6386 SE
 stepping: 0
 cpu MHz : 2800.110
 cache size  : 2048 KB
 fdiv_bug: no
 hlt_bug : no
 f00f_bug: no
 coma_bug: no
 fpu : yes
 fpu_exception   : yes
 cpuid level : 13
 wp  : yes
 flags   : --snip--
 bogomips: 5631.71
 clflush size: 64
 cache_alignment : 64
 address sizes   : 48 bits physical, 48 bits virtual
 power management: ts ttp tm 100mhzsteps hwpstate [9] [10]
 
 What's [9] and [10] supposed to mean?
 
 (Note: The OS is not actually Gentoo, but this list is sooo
 knowledgeable, and methinks the output of /proc/cpuinfo is quite
 universal...)

I don't know for sure, but looking in arch/x86/kernel/cpu/{powerflags,proc}.c
it looks like your kernel doesn't have a text description for power flag
bits 9 and 10.

In Linux 3.11.1 they are:

[9] - cpb,  /* core performance boost */
[10] - eff_freq_ro, /* Readonly aperf/mperf */
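
(For reference, the whole name table is easy to inspect on a box that has
newer kernel sources installed -- assuming /usr/src/linux points at a
3.11-or-later tree:)

# The array index in this file is the bit number, so entries 9 and 10
# are exactly what an older kernel prints as [9] and [10]:
cat /usr/src/linux/arch/x86/kernel/cpu/powerflags.c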

Todd



Re: LVM2+mdraid+systemd (was Re: [gentoo-user] systemd and lvm)

2013-09-20 Thread Stefan G. Weichinger
On 20.09.2013 10:46, Canek Peláez Valdés wrote:

 Sorry I took my time, I was busy.
 
 Well, yours is a complex setup. Here is a similar, although simpler, version:

At first: thank you for the extended test setup you built and described
... I will dig through it as soon as I find time ... I am quite busy
these days as well.

Thanks, regards, Stefan!




Re: [gentoo-user] ZFS

2013-09-20 Thread Volker Armin Hemmann
On 19.09.2013 06:47, Grant wrote:
 turn off readahead. ZFS' own readahead and the kernel's clash - badly.
 Turn off the kernel's readahead for a visible performance boon.
 You are probably not talking about ZFS readahead but about the ARC,
 which does prefetching. So yes.
 I'm taking notes on this so I want to clarify, when using ZFS,
 readahead in the kernel should be disabled by using blockdev to set it
 to 8?

 - Grant


you can't turn it off completely (afaik), but 8 is a good value - because it
is just a single 4k block.
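
For example (a sketch; the device name is just an illustration -- repeat for
each disk backing the pool):

blockdev --getra /dev/sda     # current read-ahead, in 512-byte sectors
blockdev --setra 8 /dev/sda   # 8 sectors = one 4 KiB block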



Re: LVM2+mdraid+systemd (was Re: [gentoo-user] systemd and lvm)

2013-09-20 Thread Stefan G. Weichinger

I haven't yet worked through all your suggestions/descriptions.

Edited USE-flags and dracut-modules, worked around bug

https://bugs.gentoo.org/show_bug.cgi?id=485202

and rebuilt kernel and initrd.

Still didn't activate the LVs ...

Now I edited fstab:

I had the option systemd.automount enabled, like in

/dev/mapper/VG03-media /mnt/media   ext4
noatime,user_xattr,comment=systemd.automount 0 2

The/my idea behind that: the boot process should not need to wait for
the LVs to be activated/fscked/mounted ... and my root-fs and /home are both
on the SSD anyway (not LVM-based).

I removed that option and after the next boot the LVs were activated and
mounted (though the booting was a bit slower, as expected).
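
(So the entry is now simply the plain mount, without the automount comment:)

/dev/mapper/VG03-media /mnt/media   ext4   noatime,user_xattr 0 2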

OK. I send this message now and test another few reboots.

Thanks, Stefan



Re: [gentoo-user] re: duplicated packages

2013-09-20 Thread Alexander Kapshuk
On 09/20/2013 04:37 AM, Dale wrote:
 Alexander Kapshuk wrote:
 On 09/19/2013 10:50 PM, Alan McKinnon wrote:
 On 19/09/2013 20:58, Alexander Kapshuk wrote:
 Howdy,

 Is having duplicate packages a good or a bad thing in gentoo? I'm clear
 about having duplicate packages for the kernel. I'm using the more
 recent one, but hanging on to the old one just in case.

 Perhaps, the reason for having duplicate packages is the fact that
 various packages I have installed on the system may require a different
 version of a package I may already have installed. Is that it?

 _box0=; equery list --duplicates '*'
  * Searching for * ...
 [IP-] [  ] app-text/docbook-xml-dtd-4.1.2-r6:4.1.2
 [IP-] [  ] app-text/docbook-xml-dtd-4.4-r2:4.4
 [IP-] [  ] dev-lang/python-2.7.5-r2:2.7
 [IP-] [  ] dev-lang/python-3.2.5-r2:3.2
 [IP-] [  ] dev-libs/openssl-0.9.8y:0.9.8
 [IP-] [  ] dev-libs/openssl-1.0.1e-r1:0
 [IP-] [  ] media-libs/lcms-1.19:0
 [IP-] [  ] media-libs/lcms-2.3:2
 [IP-] [  ] sys-devel/autoconf-2.13:2.1
 [IP-] [  ] sys-devel/autoconf-2.69:2.5
 [IP-] [  ] sys-devel/automake-1.10.3:1.10
 [IP-] [  ] sys-devel/automake-1.12.6:1.12
 [IP-] [  ] sys-devel/automake-1.13.4:1.13
 [IP-] [M ] sys-kernel/gentoo-sources-3.8.13:3.8.13
 [IP-] [  ] sys-kernel/gentoo-sources-3.10.7:3.10.7
 [IP-] [  ] virtual/libusb-0:0
 [IP-] [  ] virtual/libusb-1:1
 [IP-] [  ] x11-libs/gtk+-2.24.17:2
 [IP-] [  ] x11-libs/gtk+-3.4.4:3
 [IP-] [  ] x11-themes/gtk-engines-xfce-3.0.1-r200:0
 [IP-] [  ] x11-themes/gtk-engines-xfce-3.0.1-r300:3
 _
 Thanks.

 They are not duplicates; they are called SLOTS.

 And you have them for a reason - one thing needs package A version X,
 something else needs package A version Y. Usually you can have only
 one; SLOTS let you have more than one so that they can co-exist.

 Don't worry about them, let them be. You need them.



 No worries.

 Thanks.



 As an example, if you were to remove python2.7, it's been nice knowing
 you.  lol  There is no telling what all that would break, and that is
 just one package.  I get a shiver up my spine just thinking about it.
 It's always good to ask about this sort of thing.  The 20/20 rear-view
 mirror can bite.  ;-)
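
 (For the curious, plain gentoolkit can show exactly how much hangs off it --
 an illustrative check, not something from this thread:)

 equery list dev-lang/python        # both slots, side by side
 equery depends dev-lang/python     # every installed package that needs one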

 Dale

 :-)  :-) 

Nice analogy.

Thanks.




Re: LVM2+mdraid+systemd (was Re: [gentoo-user] systemd and lvm)

2013-09-20 Thread Canek Peláez Valdés
On Fri, Sep 20, 2013 at 11:17 AM, Stefan G. Weichinger li...@xunil.at wrote:

 I haven't yet worked through all your suggestions/descriptions.

 Edited USE-flags and dracut-modules, worked around bug

 https://bugs.gentoo.org/show_bug.cgi?id=485202

 and rebuilt kernel and initrd.

 Still didn't activate the LVs ...

 Now I edited fstab:

 I had the option systemd.automount enabled, like in

 /dev/mapper/VG03-media /mnt/media   ext4
 noatime,user_xattr,comment=systemd.automount 0 2

 The/my idea behind that: the boot process should not need to wait for
 the LVs to be activated/fscked/mounted ... and my root-fs and /home are both
 on the SSD anyway (not LVM-based).

 I removed that option and after the next boot the LVs were activated and
 mounted (though the booting was a bit slower, as expected).

 OK. I send this message now and test another few reboots.

Forgot to mention it: I also enabled mdadm.service.

Regards.
-- 
Canek Peláez Valdés
Posgrado en Ciencia e Ingeniería de la Computación
Universidad Nacional Autónoma de México



Re: LVM2+mdraid+systemd (was Re: [gentoo-user] systemd and lvm)

2013-09-20 Thread Stefan G. Weichinger
On 20.09.2013 18:50, Canek Peláez Valdés wrote:

 OK. I send this message now and test another few reboots.
 
 Forgot to mention it: I also enabled mdadm.service.

That service is enabled here as well and running fine.



# systemctl status lvm2-activation-net.service
lvm2-activation-net.service - Activation of LVM2 logical volumes
   Loaded: loaded (/etc/lvm/lvm.conf)
   Active: inactive (dead) since Fr 2013-09-20 20:57:15 CEST
 Docs: man:lvm(8)
   man:vgchange(8)
  Process: 580 ExecStart=/sbin/lvm vgchange -aay --sysinit (code=exited,
status=0/SUCCESS)
  Process: 366 ExecStartPre=/usr/bin/udevadm settle (code=exited,
status=0/SUCCESS)
 Main PID: 580 (code=exited, status=0/SUCCESS)

Sep 20 20:57:13 hiro.oops.intern lvm[580]: 10 logical volume(s) in
volume group VG03 now active
Sep 20 20:57:15 hiro.oops.intern systemd[1]: Started Activation of LVM2
logical volumes.


nice ... but not at every boot ... sometimes they are activated, sometimes
not.
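
One way to compare a good boot with a bad one is to pull that unit's journal
for both (assuming a persistent journal, and a systemd recent enough to accept
boot offsets):

journalctl -b -1 -u lvm2-activation-net.service   # previous boot
journalctl -b    -u lvm2-activation-net.service   # current boot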

*sigh*

Thanks for all your patience ... I could live with that lvm.service ;-)
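
For the record, a self-written unit like the one mentioned above would look
roughly like this -- a sketch modelled on the vgchange call in the status
output; the unit name and the ordering lines are my assumptions:

# /etc/systemd/system/lvm.service -- hand-rolled LV activation (sketch)
[Unit]
Description=Activate LVM2 logical volumes
DefaultDependencies=no
After=mdadm.service systemd-udev-settle.service
Before=local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/lvm vgchange -aay --sysinit

[Install]
WantedBy=local-fs.target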

Considering converting the mdadm RAID to metadata 1.2 (wouldn't hurt
anyway, right?)

Stefan



Re: [gentoo-user] ZFS

2013-09-20 Thread Grant
 How about hardened?  Does ZFS have any problems interacting with
 grsecurity or a hardened profile?

Has anyone tried hardened and ZFS together?

- Grant



Re: [gentoo-user] Comparing RAID5/6 rebuild times, SATA vs SAS vs SSD

2013-09-20 Thread Paul Hartman
On Fri, Sep 20, 2013 at 6:20 AM, Tanstaafl tansta...@libertytrek.org wrote:
 Hi all,

 Given that one of the big reasons I stopped using RAID5/6 was the rebuild
 times - they can be DAYS for a large array - I am very curious whether anyone
 has done, or knows of anyone who has done, any tests comparing rebuild times
 when using slow SATA, faster SAS and the fastest SSD drives.

 Of course, this question is moot if using ZFS RAID, but not every situation
 or circumstance will allow it...

I don't have an all-out comparison, but here is at least a data point for you
with somewhat cheap and recent hardware. I have a new (2-month-old)
home RAID6 made out of:

6 Western Digital Red 3TB SATA drives
LSI 9200-8e SAS JBOD controller
Sans Digital TR8X+B SAS/SATA enclosure w/ SFF-8088 cables

I created a standard Linux software RAID6 using mdadm, resulting in
11TB of usable space (4 data drives, 2 parity).
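
Roughly that kind of array, for anyone wanting to reproduce the numbers
(device names below are assumptions, not his actual ones):

mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]1
cat /proc/mdstat     # rebuild progress and speed show up here
# md rebuild throughput can be tuned via these sysctls:
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max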

A couple weeks ago one of the drives died. I hot-swap replaced it with
a new one (with no down-time) and the rebuild took exactly 10 hours.

Under normal operation, the speed of the array for contiguous
read/writes is about 600MB/sec, which is faster than my SSD (single
drive, not RAIDed).
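
(A quick way to sanity-check a contiguous-read figure like that, run against
the assembled md device -- device name assumed:)

hdparm -t /dev/md0   # times buffered sequential reads, cache flushed first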

FWIW



Re: [gentoo-user] ZFS

2013-09-20 Thread Hinnerk van Bruinehsen
On Fri, Sep 20, 2013 at 11:20:53AM -0700, Grant wrote:
  How about hardened?  Does ZFS have any problems interacting with
  grsecurity or a hardened profile?

 Has anyone tried hardened and ZFS together?


Hi,

I did - I had some problems, but I'm not sure if they were caused by the
combination of ZFS and hardened. There were some issues updating kernel and ZFS
(most likely due to ZFS on root and me using ~arch hardened-sources and the
live ebuild for zfs).
There are some hardened options that are known not to work (constify was
one of them, but that should be patched now). I think another one was HIDESYM.

There is a (more or less regularly updated) blog post by prometheanfire
(an installation guide for zfs+hardened+luks [1]).
So you could ask him or ryao (he seems to support hardened+zfs at least to
a certain degree).

WKR
Hinnerk


[1] 
https://mthode.org/posts/2013/Sep/gentoo-hardened-zfs-rootfs-with-dm-cryptluks-062/
 




Re: [gentoo-user] ZFS

2013-09-20 Thread Hinnerk van Bruinehsen
On Thu, Sep 19, 2013 at 06:41:47PM -0400, Douglas J Hunley wrote:

 On Tue, Sep 17, 2013 at 12:32 PM, cov...@ccs.covici.com wrote:

 So do I need that overlay at all, or just emerge zfs and its module?


 You do *not* need the overlay. Everything you need is in portage nowadays


Afaik the overlay even comes with a warning from ryao not to use it unless
you are told by him to do so (since it's very experimental and includes patches
that have not been reviewed). Unless you want to do heavy testing (best done
while communicating with ryao), you should use the ebuilds from portage.
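
In other words, plain portage is enough; to the best of my knowledge the
relevant atoms in the main tree are simply:

emerge --ask sys-kernel/spl sys-fs/zfs-kmod sys-fs/zfs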

WKR
Hinnerk




Re: [gentoo-user] ZFS

2013-09-20 Thread Grant
  How about hardened?  Does ZFS have any problems interacting with
  grsecurity or a hardened profile?

 Has anyone tried hardened and ZFS together?

 I did - I had some problems, but I'm not sure if they were caused by the
 combination of ZFS and hardened. There were some issues updating kernel and 
 ZFS
 (most likely due to ZFS on root and me using ~arch hardened-sources and the
 live ebuild for zfs).
 There are some hardened options that are known not to work (constify was
 one of them, but that should be patched now). I think another one was HIDESYM.

 There is a (more or less regularly updated) blog post by prometheanfire
 (an installation guide for zfs+hardened+luks [1]).
 So you could ask him or ryao (he seems to support hardened+zfs at least to
 a certain degree).
 [1] 
 https://mthode.org/posts/2013/Sep/gentoo-hardened-zfs-rootfs-with-dm-cryptluks-062/

Thanks for the link.  It doesn't look too bad.

- Grant