Re: [gentoo-user] [slightly O/T] mysql problems

2014-10-15 Thread Kerin Millar

On 15/10/2014 13:05, Mick wrote:

On Wednesday 15 Oct 2014 02:14:37 Kerin Millar wrote:

On 14/10/2014 23:25, Mick wrote:

On Tuesday 14 Oct 2014 21:15:48 Kerin Millar wrote:



 * Have you upgraded MySQL recently without going through the
   documented upgrade procedure? [1]


I'm still on mysql-5.5.39


OK. If it has always been running MySQL 5.5, there's nothing to be
concerned about.


No, sorry I wasn't clear.  I have been upgrading mysql on this machine for
some years now, always running stable versions.  After each update I run:

mysql_upgrade -h localhost -u root -p



 * Have you otherwise removed or modified files in the data
 directory?


Not as far as I know.  I have suspicions of fs corruption though (it's
been running out of space lately and I haven't yet found out why).


Not good. Which filesystem, if I may ask? XFS is preferable, due to its
very good performance with O_DIRECT, with ext4 coming in second. Other
filesystems may be problematic. In particular, ZFS does not support
asynchronous I/O.


ext4



In any case, go into /var/lib/mysql and check whether the file that it
mentions exists. If it does not exist, try running:

DROP TABLE `website1@002dnew`.`webform_validation_rule_components`

If that does not work then try again, using DISCARD TABLESPACE as
opposed to DROP TABLE. Note that the backtick quoting is necessary
because of the presence of the @ symbol in the database name, which
would otherwise be misinterpreted.


Hmm ... I'm probably not doing this right.

First of all, there is no local database /var/lib/mysql/website1, because this
is the live website name, on the shared server.  I only have
/var/lib/mysql/website_test on the local dev machine.

Then although I can see, e.g.

-rw-rw----  1 mysql mysql  8939 Oct 14 19:25 actions.frm
-rw-rw----  1 mysql mysql 98304 Oct 14 19:25 actions.ibd

in /var/lib/mysql/website_test, if I try to run DROP TABLE, logged in as
(mysql) root, I get an unknown table, error 1051.

=
mysql> USE website_test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> DROP TABLE `website1@002dnew`.`actions`;


Is this a table for which it is also complaining that a corresponding 
tablespace doesn't exist in database `website1@002dnew`? Your original 
post mentioned only a table named `webform_validation_rule_components`.


Whichever table(s) it is complaining about, if you happen to find a 
corresponding .ibd file in a different database (sub-directory), you 
might be able to satisfy MySQL by copying it to where it is expecting to 
find it. If that works, you should then be able to drop it.
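To illustrate the copy being suggested, here is a sketch using a throwaway directory in place of /var/lib/mysql and an empty file in place of a real .ibd tablespace; the database and table names are the ones from the thread, not anything you should assume exists:

```shell
# Stand-in for the MySQL datadir; on the real system this would be
# /var/lib/mysql and the server would need to be stopped first.
datadir=/tmp/datadir-demo
mkdir -p "$datadir/website1@002dnew" "$datadir/website_test"

# Pretend tablespace file standing in for a genuine actions.ibd.
: > "$datadir/website_test/actions.ibd"

# Copy it to where InnoDB expects to find it, preserving attributes.
cp -a "$datadir/website_test/actions.ibd" "$datadir/website1@002dnew/actions.ibd"
ls "$datadir/website1@002dnew"
```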


Sometimes, directly copying an InnoDB tablespace into place requires a 
more elaborate procedure but I won't muddy the waters by describing said 
procedure just yet.



ERROR 1051 (42S02): Unknown table 'actions'
mysql> DISCARD TABLESPACE `website1@002dnew`.`actions`;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
that corresponds to your MySQL server version for the right syntax to use near
'DISCARD TABLESPACE `website1@002dnew`.`actions`' at line 1
=

I think in mysql-5.5 I should be using DROP TABLESPACE instead?



My mistake. The correct syntax for discarding the tablespace would be:

  ALTER TABLE <table_name> DISCARD TABLESPACE;

I'm stating the obvious here, but be sure not to DROP or DISCARD 
TABLESPACE on a table whose tablespace does exist and for which you do 
not have a backup. Both commands are destructive.


--Kerin



Re: [gentoo-user] [slightly O/T] mysql problems

2014-10-14 Thread Kerin Millar

On 14/10/2014 19:54, Mick wrote:

Hi All,

This may be slightly off topic, but I thought of asking here first.  I noticed
two problems, one specific to a particular database, the other more general.
In reverse order:


1. I am getting this error when I start mysqld

141014 19:41:38 [Warning] /usr/sbin/mysqld: unknown option '--loose-federated'

Sure enough I seem to have this in /etc/mysql/my.cnf:

# Uncomment this to get FEDERATED engine support
#plugin-load=federated=ha_federated.so
loose-federated

As far as I recall this is a default setting.  Should I change it?


No. I presume that you are not actively using the federated storage 
engine but let's put that aside because there is more to this error than 
meets the eye.


Check your MySQL error log and look for any anomalies from the point at 
which MySQL is started. If you don't know where the log file is, execute 
SELECT @@log_error.


I have several questions:

  * Have you started MySQL with skip-grant-tables in effect?
  * Have you upgraded MySQL recently without going through the
documented upgrade procedure? [1]
  * Have you copied files into MySQL's data directory that originated
from a different version of MySQL?
  * Have you otherwise removed or modified files in the data directory?




2. A particular database which I have imported locally from a live site gives
me loads of this:


The wording here suggests a broader context that would be relevant. 
Please be specific as to the circumstances. What procedure did you 
employ in order to migrate and import the database? What do you mean by 
live site? Which versions of MySQL are running at both source and 
destination? How are they configured?




141014 19:41:37  InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
InnoDB: If you are installing InnoDB, remember that you must create
InnoDB: directories yourself, InnoDB does not create them.
141014 19:41:37  InnoDB: Error: trying to open a table, but could not
InnoDB: open the tablespace file
'./website1@002dnew/webform_validation_rule_components.ibd'!
InnoDB: Have you moved InnoDB .ibd files around without using the
InnoDB: commands DISCARD TABLESPACE and IMPORT TABLESPACE?
InnoDB: It is also possible that this is a temporary table #sql...,
InnoDB: and MySQL removed the .ibd file for this.


Is this some error imported from the live site, or is it due to something
being wrong locally?


MySQL believes that an InnoDB table named 
webform_validation_rule_components presently exists in a database 
named website1@002dnew but the corresponding tablespace file does not 
exist, relative to the MySQL datadir. The reason for this may become 
clear if you answer the questions posed above.


--Kerin

[1] 
https://dev.mysql.com/doc/refman/5.6/en/upgrading-from-previous-series.html 
(and its predecessors)




Re: [gentoo-user] [slightly O/T] mysql problems

2014-10-14 Thread Kerin Millar

On 14/10/2014 23:25, Mick wrote:

On Tuesday 14 Oct 2014 21:15:48 Kerin Millar wrote:

On 14/10/2014 19:54, Mick wrote:



# Uncomment this to get FEDERATED engine support
#plugin-load=federated=ha_federated.so
loose-federated

As far as I recall this is a default setting.  Should I change it?


No. I presume that you are not actively using the federated storage
engine but let's put that aside because there is more to this error than
meets the eye.

Check your MySQL error log and look for any anomalies from the point at
which MySQL is started. If you don't know where the log file is, execute
SELECT @@log_error.



141014 19:41:37 [Warning] No argument was provided to --log-bin, and --log-
bin-index was not used; so replication may break when this MySQL server acts
as a master and has his hostname changed!! Please use '--log-bin=mysqld-bin'
to avoid this problem.
141014 19:41:37 InnoDB: The InnoDB memory heap is disabled
141014 19:41:37 InnoDB: Mutexes and rw_locks use GCC atomic builtins
141014 19:41:37 InnoDB: Compressed tables use zlib 1.2.8
141014 19:41:37 InnoDB: Using Linux native AIO
141014 19:41:37 InnoDB: Initializing buffer pool, size = 16.0M
141014 19:41:37 InnoDB: Completed initialization of buffer pool
141014 19:41:37 InnoDB: highest supported file format is Barracuda.
141014 19:41:37  InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
InnoDB: If you are installing InnoDB, remember that you must create
InnoDB: directories yourself, InnoDB does not create them.
141014 19:41:37  InnoDB: Error: trying to open a table, but could not
InnoDB: open the tablespace file './website1@002dnew/actions.ibd'!
InnoDB: Have you moved InnoDB .ibd files around without using the
InnoDB: commands DISCARD TABLESPACE and IMPORT TABLESPACE?
InnoDB: It is also possible that this is a temporary table #sql...,
InnoDB: and MySQL removed the .ibd file for this.
InnoDB: Please refer to
InnoDB: 
http://dev.mysql.com/doc/refman/5.5/en/innodb-troubleshooting-datadict.html


Nothing particularly interesting there.




I have several questions:

* Have you started MySQL with skip-grant-tables in effect?


Not knowingly.  How do I find out?


If you had, you would know. It disables the privilege handling system 
outright. Typically it's used in situations where the root password has 
been forgotten or just prior to executing mysql_upgrade.


The reason for asking is that it may also prevent some storage engines 
from loading, in which case their options will not be recognized. In 
turn, this may result in confusing error messages such as the one that 
you encountered.


However, with the benefit of being able to read your my.cnf, the 
explanation turns out to be much simpler. You have loose-federated 
specified as an option but you are not loading the corresponding storage 
plugin. There is also the possibility that the engine was not compiled 
in at all (whether as a plugin or not).


Simply remove or comment the line specifying this option and the error 
should go away.
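A sketch of commenting the option out with sed, run here against a throwaway copy of the file rather than the real /etc/mysql/my.cnf:

```shell
# Work on a scratch copy; substitute /etc/mysql/my.cnf on the real system.
cnf=/tmp/my.cnf.demo

# Recreate the relevant fragment from the thread.
printf '%s\n' '#plugin-load=federated=ha_federated.so' 'loose-federated' > "$cnf"

# Prefix the offending line with a comment marker, in place.
sed -i 's/^loose-federated$/#&/' "$cnf"
grep -n federated "$cnf"
```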






* Have you upgraded MySQL recently without going through the
  documented upgrade procedure? [1]


I'm still on mysql-5.5.39


OK. If it has always been running MySQL 5.5, there's nothing to be 
concerned about.




  Installed versions:  5.5.39(16:42:22 08/09/14)(community perl ssl -
bindist -cluster -debug -embedded -extraengine -jemalloc -latin1 -max-idx-128
-minimal -profiling -selinux -static -static-libs -systemtap -tcmalloc -test)



* Have you copied files into MySQL's data directory that originated
  from a different version of MySQL?


No, not manually.


Good.





* Have you otherwise removed or modified files in the data directory?


Not as far as I know.  I have suspicions of fs corruption though (it's been
running out of space lately and I haven't yet found out why).


Not good. Which filesystem, if I may ask? XFS is preferable, due to its 
very good performance with O_DIRECT, with ext4 coming in second. Other 
filesystems may be problematic. In particular, ZFS does not support 
asynchronous I/O.






2. A particular database which I have imported locally from a live site
gives me loads of this:

The wording here suggests a broader context that would be relevant.
Please be specific as to the circumstances. What procedure did you
employ in order to migrate and import the database? What do you mean by
live site? Which versions of MySQL are running at both source and
destination? How are they configured?


mysql -u webadmin -h localhost -p website_test < website1_20141014.sql


Ah, just using DDL. That shouldn't have caused any trouble.



The server is on 5.5.36.

website1 is the database name of the live site, and website_test is the local
development database.

The server is a shared server, so I'm getting its vanilla configuration with
no choice on the matter.  The local configuration is attached.



Is this some error imported from

Re: [gentoo-user] Writing to tty01 (serial port) in simple straight forward way...?!?

2014-10-12 Thread Kerin Millar

On 12/10/2014 13:08, meino.cra...@gmx.de wrote:

Hi,

I want to send commands to ttyO1 (serial port on an embedded system).
The commands are one line each and terminated with CRL/LF (aka DOS).

Since this will be done from a batch script, it should be possible
via commandline tools and non-interactively. The serial port is
already setup up the right way.

I tried

echo -n "blablabal\x0a\x0d"


Firstly, this command is missing the -e switch. Secondly, the order of 
the control characters is wrong. I would suggest the use of printf as it 
has fewer pitfalls.


# printf '%s\r\n' command
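The byte sequence that printf emits can be checked by writing to a regular file first; "AT+STATUS" below is a made-up example command, and on the real system the target would be /dev/ttyO1 rather than a file:

```shell
# Emit a command terminated with CR+LF (the DOS line ending the poster needs),
# into a file so the exact bytes can be inspected with od.
cmd="AT+STATUS"
printf '%s\r\n' "$cmd" > /tmp/serial_demo.txt

# Show each byte; the command should be followed by \r then \n.
od -An -c /tmp/serial_demo.txt
```

On the embedded system, the same printf would simply be redirected to the serial device instead of /tmp/serial_demo.txt.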

--Kerin



Re: [gentoo-user] Handbook missing portage unpacking

2014-10-12 Thread Kerin Millar

On 11/10/2014 21:13, James wrote:

Hello,

I was just following the handbook for an amd64 install; I have
not looked at the handbook in a while. I downloaded the stage3
tarball and the portage-latest tarball at the same time, like I always
have done. The handbook give instructions for untaring the stage3, in
section 5, but not the  portage tarball, or did I miss something?


It specifies that emerge-webrsync be used, automating the process of 
fetching and extracting a portage tarball.


--Kerin




Re: [gentoo-user] VB - login from one Windows XP to another XP

2014-10-07 Thread Kerin Millar

On 07/10/2014 07:12, J. Roeleveld wrote:

On Monday, October 06, 2014 11:17:49 PM Joseph wrote:

  On 10/06/14 21:22, Jc García wrote:

  2014-10-06 19:52 GMT-06:00 Joseph syscon...@gmail.com:

   I'm running Windows XP in VirtualBox.

   I can NX to the running VB - Windows XP as (shadow or new) session.

   But that doesn't help me.

   Via Shadow session I would disturb the current user if I try to
start

   another program.

   Via New session I can not see Windows XP session as it is running.

  

   So I think I have to start VB - Windows XP on my box and try to
login to

   another (remote) VB - Windows XP Is it possible?.

  

   The user is running certain program, that uses database. I'm trying to

   login to the remote Windows XP session and start the same program
as an

   administrator (that uses that same database).

  

   How to log-in from one windows XP to another over the network?

  

  You search in google and do a ton of clicks, this is so OT. and there

  are many answers out here, even the unix-style one works.

 

  Windows XP has a built-in Remote Desktop but:

  http://www.wikihow.com/Use-Windows-XP%27s-Built-in-Remote-Desktop-Utility

 

  But I think it only works on the same subnet.

As stated by others, this is way OT and Google should help here.

RDC works over different subnets, even (if you're stupid enough to open
the ports in the firewall) from a different country over the internet.

MS Windows XP is NOT a multi-user OS, what you want to do might work,
but is extremely flakey.


It is a multi-user OS. The problem is that the Remote Desktop Session 
Host component is artificially restricted in non-server editions of 
Windows, preventing concurrent sessions. Those so inclined can remove 
this restriction with an unofficial patch that is fairly easy to find.


--Kerin



Re: [gentoo-user] perl-5.20.1 - has anybody managed to upgrade Perl?

2014-10-07 Thread Kerin Millar

On 07/10/2014 09:13, Helmut Jarausch wrote:

Hi,

dev-lang/perl-5.20.1 is in the tree (unmasked), but trying to upgrade
gives me lots of blocks requiring versions which are not
in the tree, yet, like

[blocks B  ] perl-core/Socket-2.13.0 (perl-core/Socket-2.13.0 is
blocking virtual/perl-Socket-2.13.0)

Has anybody tried to upgrade to this version of Perl?


Here is a generically applicable approach to handling upgrades in 
situations where a major dev-lang/perl update is queued.


1) emerge -auDN @world
2) If previous step fails: emerge -auDN --backtrack=30 @world
3) If previous step fails:
   a) emerge --deselect $(qlist -IC 'perl-core/*')
   b) Return to step #2
4) perl-cleaner --all
5) If previous step fails, follow the instructions in the error message
6) Re-select any perl modules that you may have explicitly requested

The final step won't apply to you unless you are a Perl hacker.

--Kerin



Re: [gentoo-user] perl-5.20.1 - has anybody managed to upgrade Perl?

2014-10-07 Thread Kerin Millar

On 07/10/2014 09:31, Kerin Millar wrote:

On 07/10/2014 09:13, Helmut Jarausch wrote:

Hi,

dev-lang/perl-5.20.1 is in the tree (unmasked), but trying to upgrade
gives me lots of blocks requiring versions which are not
in the tree, yet, like

[blocks B  ] perl-core/Socket-2.13.0 (perl-core/Socket-2.13.0 is
blocking virtual/perl-Socket-2.13.0)

Has anybody tried to upgrade to this version of Perl?


Here is a generically applicable approach to handling upgrades in
situations where a major dev-lang/perl update is queued.

1) emerge -auDN @world
2) If previous step fails: emerge -auDN --backtrack=30 @world
3) If previous step fails:
a) emerge --deselect $(qlist -IC 'perl-core/*')
b) Return to step #2
4) perl-cleaner --all
5) If previous step fails, follow the instructions in the error message
6) Re-select any perl modules that you may have explicitly requested

The final step won't apply to you unless you are a Perl hacker.


I meant to include the step of running emerge --depclean just after 
successfully concluding steps 4 or 5.


--Kerin



Re: [gentoo-user] another headless device-question: In search of the LAN

2014-09-30 Thread Kerin Millar

On 30/09/2014 12:49, meino.cra...@gmx.de wrote:

Neil Bothwick n...@digimed.co.uk [14-09-30 12:44]:

On Tue, 30 Sep 2014 12:39:06 +0200, meino.cra...@gmx.de wrote:


[I] sys-apps/ifplugd
  Available versions:
 0.28-r9[doc selinux]
  Installed versions:  0.28-r9(11:14:57 12/18/10)(-doc)
  Homepage:
http://0pointer.de/lennart/projects/ifplugd/ Description:
Brings up/down ethernet ports automatically with cable detection



and another alternative would be sys-apps/netplug



WHOW! That all sounds much more easier than I have dreamt of!
I did not thought, that such software exists already -- and therefore
did not search for it...


If you use openrc, you only need to install one of these programs, not
configure or set it to run. OpenRC detects that one of these programs is
available and uses it to do exactly what you need.


--
Neil Bothwick

We are upping our standards - so up yours.


...ok, it works!
...nearly... ;)



Unfortunately, ntp-client is configured via rc-update (added to
default) but after plugging in LAN the interface eth0 comes
up and I have access via ssh... but the date is set to the beginning
of the UNIX epoch.
I have to call ntp-client by hand.


If you know that net.eth0 is specifically required to be up for 
ntp-client to work, you should render OpenRC aware of the fact:


echo 'rc_need="net.eth0"' >> /etc/conf.d/ntp-client

--Kerin



Re: [gentoo-user] another headless device-question: In search of the LAN

2014-09-30 Thread Kerin Millar

On 30/09/2014 14:46, Kerin Millar wrote:

On 30/09/2014 14:42, meino.cra...@gmx.de wrote:

Kerin Millar kerfra...@fastmail.co.uk [14-09-30 15:08]:

On 30/09/2014 12:49, meino.cra...@gmx.de wrote:

Neil Bothwick n...@digimed.co.uk [14-09-30 12:44]:

On Tue, 30 Sep 2014 12:39:06 +0200, meino.cra...@gmx.de wrote:


[I] sys-apps/ifplugd
  Available versions:
 0.28-r9[doc selinux]
  Installed versions:  0.28-r9(11:14:57 12/18/10)(-doc)
  Homepage:
http://0pointer.de/lennart/projects/ifplugd/ Description:
Brings up/down ethernet ports automatically with cable detection



and another alternative would be sys-apps/netplug



WHOW! That all sounds much more easier than I have dreamt of!
I did not thought, that such software exists already -- and
therefore
did not search for it...


If you use openrc, you only need to install one of these programs,
not
configure or set it to run. OpenRC detects that one of these programs
is
available and uses it to do exactly what you need.


--
Neil Bothwick

We are upping our standards - so up yours.


...ok, it works!
...nearly... ;)



Unfortunately, ntp-client is configured via rc-update (added to
default) but after plugging in LAN the interface eth0 comes
up and I have access via ssh... but the date is set to the beginning
of the UNIX epoch.
I have to call ntp-client by hand.


If you know that net.eth0 is specifically required to be up for
ntp-client to work, you should render OpenRC aware of the fact:

echo 'rc_need="net.eth0"' >> /etc/conf.d/ntp-client

--Kerin



Hi Kerin,

I tried a similar thing:

#!/sbin/runscript
# Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: /var/cvsroot/gentoo-x86/net-misc/ntp/files/ntp-client.rc,v
1.13 2013/12/24 11:01:52 vapier Exp $

depend() {
 before cron portmap
 after eth0
 use dns logger
}


for after XYZ
I set
 net
 net.eth0
 eth0
and none worked for me...


Using 'after' won't work unless both net.eth0 and ntp-client are in the
default runlevel. Obviously, that condition is not satisfied if you are
using ifplugd. Please try the solution mentioned in my previous post. It
should work.


On second thoughts, it might have the unintended effect of starting 
net.eth0 before ifplugd does. If you try it, let me know how it goes.


--Kerin



Re: [gentoo-user] another headless device-question: In search of the LAN

2014-09-30 Thread Kerin Millar

On 30/09/2014 14:58, Neil Bothwick wrote:

On Tue, 30 Sep 2014 14:46:46 +0100, Kerin Millar wrote:


depend() {
  before cron portmap
  after eth0
  use dns logger
}


for after XYZ
I set
  net
  net.eth0
  eth0
and none worked for me...


Using 'after' won't work unless both net.eth0 and ntp-client are in the
default runlevel. Obviously, that condition is not satisfied if you are
using ifplugd. Please try the solution mentioned in my previous post.
It should work.


ifplugd shouldn't be in any runlevel, it is just there for openrc to use.


I did not claim or suggest otherwise.

--Kerin



Re: [gentoo-user] another headless device-question: In search of the LAN

2014-09-30 Thread Kerin Millar

On 30/09/2014 14:42, meino.cra...@gmx.de wrote:

Kerin Millar kerfra...@fastmail.co.uk [14-09-30 15:08]:

On 30/09/2014 12:49, meino.cra...@gmx.de wrote:

Neil Bothwick n...@digimed.co.uk [14-09-30 12:44]:

On Tue, 30 Sep 2014 12:39:06 +0200, meino.cra...@gmx.de wrote:


[I] sys-apps/ifplugd
  Available versions:
 0.28-r9[doc selinux]
  Installed versions:  0.28-r9(11:14:57 12/18/10)(-doc)
  Homepage:
http://0pointer.de/lennart/projects/ifplugd/ Description:
Brings up/down ethernet ports automatically with cable detection



and another alternative would be sys-apps/netplug



WHOW! That all sounds much more easier than I have dreamt of!
I did not thought, that such software exists already -- and
therefore
did not search for it...


If you use openrc, you only need to install one of these programs,
not
configure or set it to run. OpenRC detects that one of these programs
is
available and uses it to do exactly what you need.


--
Neil Bothwick

We are upping our standards - so up yours.


...ok, it works!
...nearly... ;)



Unfortunately, ntp-client is configured via rc-update (added to
default) but after plugging in LAN the interface eth0 comes
up and I have access via ssh... but the date is set to the beginning
of the UNIX epoch.
I have to call ntp-client by hand.


If you know that net.eth0 is specifically required to be up for
ntp-client to work, you should render OpenRC aware of the fact:

echo 'rc_need="net.eth0"' >> /etc/conf.d/ntp-client

--Kerin



Hi Kerin,

I tried a similar thing:

#!/sbin/runscript
# Copyright 1999-2013 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: /var/cvsroot/gentoo-x86/net-misc/ntp/files/ntp-client.rc,v 1.13 
2013/12/24 11:01:52 vapier Exp $

depend() {
 before cron portmap
 after eth0
 use dns logger
}


for after XYZ
I set
 net
 net.eth0
 eth0
and none worked for me...


Using 'after' won't work unless both net.eth0 and ntp-client are in the 
default runlevel. Obviously, that condition is not satisfied if you are 
using ifplugd. Please try the solution mentioned in my previous post. It 
should work.


--Kerin



Re: [gentoo-user] another headless device-question: In search of the LAN

2014-09-30 Thread Kerin Millar

On 30/09/2014 15:03, Neil Bothwick wrote:

On Tue, 30 Sep 2014 15:00:39 +0100, Kerin Millar wrote:


Using 'after' won't work unless both net.eth0 and ntp-client are in
the default runlevel. Obviously, that condition is not satisfied if
you are using ifplugd. Please try the solution mentioned in my
previous post. It should work.


ifplugd shouldn't be in any runlevel, it is just there for openrc to
use.


I did not claim or suggest otherwise.


Your suggestion that net.eth0 was not in the default runlevel if using
ifplugd suggested that ifplugd was, otherwise the interface would never
be started.


No, it didn't. I pointed out that his attempt to use 'after' could 
never have worked. I did so by explaining the exact conditions under 
which 'after' would have made a difference.


Aside from that, I am well aware that he is using ifplugd and how it works.

--Kerin



Re: [gentoo-user] bloated by gcc

2014-09-29 Thread Kerin Millar

On 29/09/2014 16:10, Jorge Almeida wrote:

On Mon, Sep 29, 2014 at 3:50 PM, Marc Stürmer m...@marc-stuermer.de wrote:

Am 28.09.2014 10:44, schrieb Jorge Almeida:


I'm having a somewhat disgusting issue on my Gentoo: binaries are
unaccountably large.



Really? Who cares. Storage is so cheap nowadays that that kind of bloat
simply doesn't matter on normal desktop computers anymore.

Embedded systems though are a different cup of coffee.


I care, that's why I wrote to this list. What I don't care about is
your opinions, no more than you care about mine. Feel free to start a
thread about whatever you find
important/interesting/cool/shining/modern. Bye.


You might consider making contact with the toolchain herd at gentoo or 
filing a bug. I, for one, would be interested to know the outcome.


--Kerin



Re: [gentoo-user] Running a program on a headless computer ?

2014-09-28 Thread Kerin Millar

On 28/09/2014 15:13, meino.cra...@gmx.de wrote:

Hi,

I want to run programs which insist on having a terminal to write
their status to, and which write their results to files, on a headless
computer (beaglebone).

I tried things like

 my_program -o file.txt -parameter value > /dev/null 2>&1 &

but this results in an idle copy of this process and a defunct
child.

The program does not use X11 in any way...

Is there any neat trick to accomplish what I am trying to do here?


Take a look at daemonize. It is available in portage.

http://software.clapper.org/daemonize/
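What daemonize automates can be approximated in plain shell, as a rough sketch rather than a substitute for the tool: detach from the terminal, redirect every standard stream, and background the process. "sleep 1" stands in for the poster's my_program, and the log path is arbitrary:

```shell
# Run the command immune to hangups, with stdin/stdout/stderr all redirected
# so it never needs a terminal, and background it.
nohup sh -c 'echo "started"; sleep 1' > /tmp/headless.log 2>&1 < /dev/null &

# Give the child a moment to write its status, then inspect the log.
sleep 0.5
cat /tmp/headless.log
```

daemonize additionally handles chdir, umask, pidfiles and double-forking properly, which is why it is the better choice for anything beyond a quick sketch.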

--Kerin



Re: [gentoo-user] [Security] Update bash *NOW*

2014-09-25 Thread Kerin Millar

On 25/09/2014 02:58, Walter Dnes wrote:

[snip]


...with malicious stuff, and it could get ugly.  app-shells/bash-4.2_p48
has been pushed to Gentoo stable.  The same env command results in...


Unfortunately, that version did fully address the problem. Instead, 
upgrade to 4.2_p48-r1 or any of the -r1 revision bumps that were 
recently committed. For further details:


https://bugs.gentoo.org/show_bug.cgi?id=523592

--Kerin



Re: [gentoo-user] [Security] Update bash *NOW*

2014-09-25 Thread Kerin Millar

On 25/09/2014 13:54, Kerin Millar wrote:

On 25/09/2014 02:58, Walter Dnes wrote:

[snip]


...with malicious stuff, and it could get ugly.  app-shells/bash-4.2_p48
has been pushed to Gentoo stable.  The same env command results in...


Unfortunately, that version did fully address the problem. Instead,
upgrade to 4.2_p48-r1 or any of the -r1 revision bumps that were
recently committed. For further details:

https://bugs.gentoo.org/show_bug.cgi?id=523592



Oops. Obviously, I meant to write "did not fully address the problem".

--Kerin



Re: [gentoo-user] Re: File system testing

2014-09-19 Thread Kerin Millar

On 18/09/2014 14:12, Alec Ten Harmsel wrote:


On 09/18/2014 05:17 AM, Kerin Millar wrote:

On 17/09/2014 21:20, Alec Ten Harmsel wrote:

As far as HDFS goes, I would only set that up if you will use it for
Hadoop or related tools. It's highly specific, and the performance is
not good unless you're doing a massively parallel read (what it was
designed for). I can elaborate why if anyone is actually interested.


I, for one, am very interested.

--Kerin



Alright, here goes:

Rich Freeman wrote:


FYI - one very big limitation of hdfs is its minimum filesize is
something huge like 1MB or something like that.  Hadoop was designed
to take a REALLY big input file and chunk it up.  If you use hdfs to
store something like /usr/portage it will turn into the sort of
monstrosity that you'd actually need a cluster to store.


This is exactly correct, except we run with a block size of 128MB, and a large 
cluster will typically have a block size of 256MB or even 512MB.

HDFS has two main components: a NameNode, which keeps track of which blocks are 
a part of which file (in memory), and the DataNodes that actually store the 
blocks. No data ever flows through the NameNode; it negotiates transfers 
between the client and DataNodes and negotiates transfers for jobs. Since the 
NameNode stores metadata in-memory, small files are bad because RAM gets wasted.

What exactly is Hadoop/HDFS used for? The most common uses are generating 
search indices on data (which is a batch job) and doing non-realtime processing 
of log streams and/or data streams (another batch job) and allowing a large 
number of analysts run disparate queries on the same large dataset (another 
batch job). Batch processing - processing the entire dataset - is really where 
Hadoop shines.

When you put a file into HDFS, it gets split based on the block size. This is 
done so that a parallel read will be really fast - each map task reads in a 
single block and processes it. Ergo, if you put in a 1GB file with a 128MB 
block size and run a MapReduce job, 8 map tasks will be launched. If you put in 
a 1TB file, 8192 tasks would be launched. Tuning the block size is important to 
optimize the overhead of launching tasks vs. potentially under-utilizing a 
cluster. Typically, a cluster with a lot of data has a bigger block size.
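The task counts above (8 for a 1 GB file, 8192 for a 1 TB file) follow directly from the block arithmetic; a quick shell sketch using binary units:

```shell
# Number of map tasks is ceil(file_size / block_size), here with the
# 128 MiB block size described above.
block=$((128 * 1024 * 1024))
for size_gib in 1 1024; do
    size=$((size_gib * 1024 * 1024 * 1024))
    tasks=$(( (size + block - 1) / block ))
    echo "${size_gib} GiB file -> ${tasks} map tasks"
done
# prints: 1 GiB file -> 8 map tasks
#         1024 GiB file -> 8192 map tasks
```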

The downsides of HDFS:
* Seeked reads are not supported afaik because no one needs that for batch 
processing
* Seeked writes into an existing file are not supported because either blocks 
would be added in the middle of a file and wouldn't be 128MB, or existing 
blocks would be edited, resulting in blocks larger than 128MB. Both of these 
scenarios are bad.

Since HDFS users typically do not need seeked reads or seeked writes, these 
downsides aren't really a big deal.

If something's not clear, let me know.


Thank you for taking the time to explain.

--Kerin



Re: [gentoo-user] File system testing

2014-09-18 Thread Kerin Millar

On 17/09/2014 19:21, J. Roeleveld wrote:

On 17 September 2014 20:10:57 CEST, Hervé Guillemet he...@guillemet.org 
wrote:

Le 16/09/2014 21:07, James a écrit :


By now many are familiar with my keen interest in clustering gentoo
systems. So, what most cluster technologies use is a distributed file
system on top of the local (HD/SDD) file system. Naturally not
all file systems, particularly the distributed file systems, have
straightforward instructions. Also, a device file system, such as
XFS, and a distributed file system (on top of the device file system)
may not work very well when paired. So a variety of testing is
something I'm researching. Elimination of either file system
listed below, based on Gentoo user experience, is most welcome
information, as well as tips and tricks for setting up any file system.


Hi James,

Have you found this document :

http://hal.inria.fr/hal-00789086/PDF/a_survey_of_dfs.pdf

On a related matter, I'd like to host my own file server on a dedicated
box so that I can access my working files from serveral locations. I'd
like it to be fast and secure, and I don't mind if the files are
replicated on each workstation. What would be the better tools for this
?


AFS has caching and can survive temporary disappearance of the server.

For me, I need to be able to provide Samba filesharing on top of that layer in 
two different locations, as I don't expect the network bandwidth to be sufficient 
for normal operations. (ADSL uplinks tend to be dead slow.)


You might try GlusterFS with two replicating bricks. The latest version 
of Samba in portage includes a VFS plugin that can integrate GlusterFS 
volumes via GFAPI.
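
As a sketch (the volume name, brick paths and host names here are 
assumptions, not a tested configuration): create a two-brick replicated 
volume, then export it through Samba's vfs_glusterfs module, which 
speaks GFAPI directly rather than going through a FUSE mount:

```
  # gluster volume create gv0 replica 2 node1:/bricks/b1 node2:/bricks/b1
  # gluster volume start gv0

  [shared]
  vfs objects = glusterfs
  glusterfs:volume = gv0
  glusterfs:volfile_server = localhost
  path = /
  read only = no
  kernel share modes = no
```

The share section goes in smb.conf on each Samba host; kernel share 
modes typically must be disabled because the files are not visible to 
the local kernel.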


--Kerin



Re: [gentoo-user] Re: File system testing

2014-09-18 Thread Kerin Millar

On 17/09/2014 21:20, Alec Ten Harmsel wrote:

As far as HDFS goes, I would only set that up if you will use it for
Hadoop or related tools. It's highly specific, and the performance is
not good unless you're doing a massively parallel read (what it was
designed for). I can elaborate why if anyone is actually interested.


I, for one, am very interested.

--Kerin



Re: [gentoo-user] OOM memory issues

2014-09-18 Thread Kerin Millar

On 18/09/2014 16:48, James wrote:

Hello,

Out-of-memory (OOM) conditions seem to invoke a mysterious mechanism
that kills the offending processes. OOM seems to be a common problem
that pops up over and over again within the clustering communities.


I would greatly appreciate (gentoo) illuminations on the OOM issues;
both historically and for folks using/testing systemd. Not a flame_a_thon,
just some technical information, as I need to understand these
issues more deeply, how to find, measure and configure around OOM issues,
in my quest for gentoo clustering.


The need for the OOM killer stems from the fact that memory can be 
overcommitted. These articles may prove informative:


http://lwn.net/Articles/317814/
http://www.oracle.com/technetwork/articles/servers-storage-dev/oom-killer-1911807.html

In my case, the most likely trigger - as rare as it is - would be a 
runaway process that consumes more than its fair share of RAM. 
Therefore, I make a point of adjusting the score of production-critical 
applications to ensure that they are less likely to be culled.
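
As a minimal sketch of such an adjustment: a process can be made 
*more* likely to be selected by raising its score, which needs no 
privilege; lowering the score below its current value requires 
CAP_SYS_RESOURCE (hence production daemons are usually adjusted 
downward from a privileged init script).

```shell
# Inspect and raise the badness adjustment of the current shell.
cat /proc/self/oom_score_adj        # usually 0 by default
echo 300 > /proc/self/oom_score_adj # make it more likely to be killed
cat /proc/self/oom_score_adj        # now reads back as 300
```

The same file exists for any process as /proc/PID/oom_score_adj.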


If your cases are not pathological, you could increase the amount of 
memory, be it by additional RAM or additional swap [1]. Alternatively, 
if you are able to precisely control the way in which memory is 
allocated and can guarantee that it will not be exhausted, you may elect 
to disable overcommit, though I would not recommend it.


With NUMA, things may be more complicated because there is the potential 
for a particular memory node to be exhausted, unless memory interleaving 
is employed. Indeed, I make a point of using interleaving for MySQL, 
having gotten the idea from the Twitter fork.


Finally, make sure you are using at least Linux 3.12, because some 
improvements have been made there [2].


--Kerin

[1] At a pinch, additional swap may be allocated as a file
[2] https://lwn.net/Articles/562211/#oom



Re: [gentoo-user] crontab - and' condition

2014-09-18 Thread Kerin Millar

On 18/09/2014 17:44, Joseph wrote:

I want to run a cron job only once a month.  The problem is that the computer
is only on during weekdays, Mon-Fri (days 1-5).

A crontab entry like the one below acts as an OR condition, because it has
entries in both the day-of-month and day-of-week fields:

5 18 1 * 2  rsync -av ...

so it will run on day 1 of each month or on every Tuesday.

Is it possible to create an AND condition, e.g. run it only on the Tuesday
that falls between days 1 and 7 of the month?


You can place a script in /etc/cron.monthly. The run-crons script is 
scheduled to execute every 10 minutes, and it will ensure that your 
script is executed on a monthly schedule. This is true of both cronie 
and vixie-cron.
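
Alternatively, the AND condition can be emulated within a single 
crontab entry: restrict the day-of-week field to Tuesday and guard the 
command with a day-of-month test. This is a sketch (the rsync arguments 
are elided as in your example); note that % must be escaped in crontab 
entries.

```
# Fires at 18:05 every Tuesday, but the guard only lets the command
# proceed when that Tuesday falls within days 1-7 of the month,
# i.e. the first Tuesday.
5 18 * * 2  [ "$(date +\%d)" -le 7 ] && rsync -av ...
```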


--Kerin



Re: [gentoo-user] Re: OOM memory issues

2014-09-18 Thread Kerin Millar

On 18/09/2014 19:27, James wrote:

Kerin Millar kerframil at fastmail.co.uk writes:



The need for the OOM killer stems from the fact that memory can be
overcommitted. These articles may prove informative:



http://lwn.net/Articles/317814/


Yeah, I saw this article.  It's dated February 4, 2009. How much has
changed in the kernel/configs/userspace mechanism since then? Nothing, everything?


A new tunable, oom_score_adj, was added, which accepts values between 
-1000 and 1000.


https://github.com/torvalds/linux/commit/a63d83f#include/linux/oom.h

As mentioned there, the oom_adj tunable remains for reasons of 
backward compatibility. Setting one will adjust the other per the 
appropriate scale.
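
The mirroring can be observed directly; writing the maximum 
oom_score_adj reads back as the maximum on the legacy scale:

```shell
# Raise this shell's badness to the maximum (always killed first),
# then observe the same setting on the legacy -17..15 oom_adj scale.
echo 1000 > /proc/self/oom_score_adj
cat /proc/self/oom_score_adj   # 1000
cat /proc/self/oom_adj         # 15, the legacy maximum
```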


It doesn't look as though Karthikesan's proposal for a cgroup based 
controller was ever accepted.


--Kerin



Re: [gentoo-user] Re: unix2dos blocks dos2unix

2014-09-16 Thread Kerin Millar

On 16/09/2014 15:16, Nikos Chantziaras wrote:

On 14/09/14 10:24, Gevisz wrote:

I have just installed the unix2dos utility
(never had a need to use it before) and
just after that tried to install dos2unix,
but the installation of dos2unix failed,
complaining that

app-text/unix2dos is blocking app-text/dos2unix-6.0.5

I find it very strange, as I think that someone who
needs the unix2dos utility usually also needs the dos2unix
utility, especially taking into account that unix2dos
by default overwrites its input file.


There's also app-text/tofrodos. It installs two binaries, todos and
fromdos.  It solved some problems that the other tools (unix2dos and
dos2unix) had, but that was long ago and I don't remember which
problems they were...


Alternatively:

cd /usr/local/bin; echo dos2unix unix2dos | xargs -n1 ln -s /bin/busybox

--Kerin



Re: [gentoo-user] duplicate HD drives

2014-09-13 Thread Kerin Millar

On 13/09/2014 04:17, Joseph wrote:

On 09/12/14 23:52, Neil Bothwick wrote:

On Fri, 12 Sep 2014 15:53:19 -0600, Joseph wrote:


I have two identical HD in a box and want to duplicate sda to sdb
I want sdb to be bootable just in case something happens to sda so I
can swap the drives and boot.

Do I boot from USB and run:
dd if=/dev/sda of=/dev/sdb bs=512 count=1


If you remove the count argument as already mentioned, this will copy the
whole drive, but it will be incredibly slow unless you add bs=4k. It also
only copies it once; as soon as you start using sda, sdb will be out of
date. Set up a RAID-1 array with the two drives, then install GRUB to the
boot sector of each drive using grub2-install, and you will always be
able to boot in the event of a failure of either drive.


--
Neil Bothwick


I'd be interested in setting up RAID-1. Is it hard?
I've never done it, though I know there is plenty of information online
about RAID-1.

I'm not going to grub2 anytime soon.  This machine has a BIOS and the HD
has an MBR partition table.
With the recent problem I had with my other, older box (which has a BIOS)
and grub2, I'm not going to play with it.

Is it hard to set up RAID-1?


No, it is not. However, to keep things simple, observe the following:

  * create the array with the --metadata=0 option (using mdadm)
  * mark the partitions belonging to the array as type FD
  * enable CONFIG_MD_AUTODETECT in the kernel

Doing so will ensure two things. Firstly, that the legacy version of 
grub is able to read the kernel. Unlike grub2, it does not intrinsically 
understand RAID. Using the original metadata format prevents that from 
being an issue; grub can be pointed to just one of the two partitions 
belonging to a RAID-1 array and read its filesystem.


Secondly, using the original metadata format means that, once the kernel 
has loaded, it is able to assemble the array by itself. Therefore, you 
may have your root filesystem on the array and mount it without having 
to use an initramfs.
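
Putting those points together, the creation step might look like this 
(a sketch; the device names are assumptions and must match your own 
partitioning):

```
  # mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 \
  #       /dev/sda1 /dev/sdb1
  # cat /proc/mdstat
```

Here, --metadata=0.90 selects the original superblock format. Watch 
/proc/mdstat until the initial resync completes.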


In terms of partitioning, you could just create one big partition on 
each drive, join them into an array, and make that the basis of a root 
filesystem. As much as Gentoo has enshrined the concept, a dedicated 
boot filesystem is simply not necessary and swap can be created as a 
file. Alternatively, you could follow the handbook style and create 
three arrays for boot, swap and root.


There is a trick to achieving bootloader redundancy. Let's say that you 
have set up array /dev/md0, with /dev/sda1 and /dev/sdb1 as its members, 
and that /dev/md0 contains a singular root filesystem. In the grub 
shell, one would run these commands:


  grub> device (hd0) /dev/sda
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> device (hd0) /dev/sdb
  grub> root (hd0,0)
  grub> setup (hd0)

The magic here is that the bootloader will still be able to function, 
even if a disk is removed or broken.


Finally, even if your disks are not exactly the same size, it does 
not matter. If there is a discrepancy among the devices that mdadm is 
given to create an array with, it will size the array according to the 
lowest common denominator. If you prefer, you can manually ensure that 
the partitions are exactly the same size on both disks.


--Kerin



Re: [gentoo-user] duplicate HD drives

2014-09-13 Thread Kerin Millar

On 13/09/2014 17:45, Alan McKinnon wrote:

snip


If I do:
fdisk /dev/sda
t 1 fd

Won't it destroy data on /dev/sda?



No.


Although mdadm will. A simple solution is to create the array with only 
the second disk as the initial member and designate the other device as 
literally "missing". The array will function in degraded mode. Then it 
is simply a matter of copying over the data from the original filesystem 
on the first disk, after which it may be (destructively) added to the array.
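
As a sketch, with assumed device names (the original data on 
/dev/sda1, the fresh disk partitioned as /dev/sdb1, and an assumed 
mountpoint for the new filesystem):

```
  # mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 \
  #       missing /dev/sdb1
  # mkfs.ext4 /dev/md0
  # mount /dev/md0 /mnt/new
  # rsync -aHAX /old/. /mnt/new/.
  # mdadm --manage /dev/md0 --add /dev/sda1
```

The final command is the destructive step: it wipes /dev/sda1 and 
triggers a resync from /dev/sdb1, after which the array is whole.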


--Kerin



Re: [gentoo-user] BACKUPS

2014-09-10 Thread Kerin Millar

On 10/09/2014 06:32, Joseph wrote:

On 09/10/14 06:10, Kerin Millar wrote:


snip


Thank you again. On a different subject: do you have a good pointer on
how to back up a system?
I just had an HD crash, so I selected a replacement SSD and I'm
reinstalling the software.
I had backups of /etc and /home, but I missed all the other settings,
e.g. the /boot kernel config and other files that are not in the /etc
directory, like the hylafax settings.

Is there a way to keep backup of all those configuration files that are
manually edited?


As suggested by Neil, please begin a new thread.

--Kerin




Re: [gentoo-user] can not compile / emerge

2014-09-09 Thread Kerin Millar

On 09/09/2014 19:36, Joseph wrote:

I was installing an application gimp and all of a sudden I got an error:


Emerging (7 of 8) media-gfx/gimp-2.8.10-r1

* gimp-2.8.10.tar.bz2 SHA256 SHA512 WHIRLPOOL size ;-)
...
[ ok ]
* gimp-2.8.10-freetype251.patch SHA256 SHA512 WHIRLPOOL size ;-)
...
[ ok ]

cfg-update-1.8.2-r1: Creating checksum index...
Unpacking source...
Unpacking gimp-2.8.10.tar.bz2 to
/var/tmp/portage/media-gfx/gimp-2.8.10-r1/work
Unpacking gimp-2.8.10-freetype251.patch to
/var/tmp/portage/media-gfx/gimp-2.8.10-r1/work

unpack gimp-2.8.10-freetype251.patch: file format not recognized. Ignoring.

Source unpacked in /var/tmp/portage/media-gfx/gimp-2.8.10-r1/work
Preparing source in
/var/tmp/portage/media-gfx/gimp-2.8.10-r1/work/gimp-2.8.10 ...

* Applying gimp-2.7.4-no-deprecation.patch
...
[ ok ]
* Applying gimp-2.8.10-freetype251.patch
...
[ ok ]
* Applying gimp-2.8.10-clang.patch
...
[ ok ]
* Running eautoreconf in
'/var/tmp/portage/media-gfx/gimp-2.8.10-r1/work/gimp-2.8.10' ...
* Running glib-gettextize --copy --force
...
[ ok ]
* Running intltoolize --automake --copy --force
...
[ ok ]
* Skipping 'gtkdocize --copy' due gtkdocize not installed
* Running libtoolize --install --copy --force --automake
...
[ ok ]
* Running aclocal -I m4macros
...
[ ok ]
* Running autoconf
...
[ ok ]
* Running autoheader
...
[ ok ]
* Running automake --add-missing --copy --force-missing
...
[ ok ]
* Running elibtoolize in: gimp-2.8.10/
*   Applying portage/1.2.0 patch ...
*   Applying sed/1.5.6 patch ...
*   Applying as-needed/2.4.2 patch ...
*   Applying target-nm/2.4.2 patch ...
* Fixing OMF Makefiles
...
[ ok ]
* Disabling deprecation warnings
...
[ ok ]

Source prepared.
Configuring source in
/var/tmp/portage/media-gfx/gimp-2.8.10-r1/work/gimp-2.8.10 ...

* econf: updating gimp-2.8.10/config.sub with
/usr/share/gnuconfig/config.sub
* econf: updating gimp-2.8.10/config.guess with
/usr/share/gnuconfig/config.guess
./configure --prefix=/usr --build=x86_64-pc-linux-gnu
--host=x86_64-pc-linux-gnu --mandir=/usr/share/man
--infodir=/usr/share/info --datadir=/usr/share --sysconfdir=/etc
--localstatedir=/var/lib --libdir=/usr/lib64 --disable-silent-rules
--disable-dependency-tracking --docdir=/usr/share/doc/gimp-2.8.10-r1
--disable-maintainer-mode --disable-gtk-doc --enable-default-binary
--disable-silent-rules --with-x --without-aa --with-alsa
--disable-altivec --with-bzip2 --without-libcurl --with-dbus
--without-gvfs --without-webkit --with-libjpeg --without-libjasper
--with-libexif --with-lcms=lcms2 --without-gs --enable-mmx --with-libmng
--with-poppler --with-libpng --disable-python --disable-mp --enable-sse
--with-librsvg --with-libtiff --with-gudev --without-wmf --with-xmc
--without-libxpm --without-xvfb-run
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking whether make supports nested variables... (cached) yes
checking for x86_64-pc-linux-gnu-gcc... x86_64-pc-linux-gnu-gcc
checking whether the C compiler works... no
configure: error: in
`/var/tmp/portage/media-gfx/gimp-2.8.10-r1/work/gimp-2.8.10':
configure: error: C compiler cannot create executables
See `config.log' for more details


Now, emerge / equery will not even run from the command line.
Most of the time I'm getting an error:
error while loading shared libraries: libstdc++.so.6: cannot open shared
object file: No such file or directory

Running on my other system I get:
equery b libstdc++.so.6
* Searching for libstdc++.so.6 ... sys-devel/gcc-4.5.4
(/usr/lib/gcc/x86_64-pc-linux-gnu/4.5.4/libstdc++.so.6 -
libstdc++.so.6.0.14)

env-update - doesn't work either



Check beneath /etc/ld.so.conf.d and ensure that there is a file 
defining the appropriate paths for your current version of gcc. Here's 
how it looks on my system:


  # cd /etc/ld.so.conf.d
  # ls
  05binutils.conf  05gcc-x86_64-pc-linux-gnu.conf
  # cat 05gcc-x86_64-pc-linux-gnu.conf
  /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3/32
  /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3

Once you have made any necessary changes, run ldconfig.

--Kerin



Re: [gentoo-user] can not compile / emerge

2014-09-09 Thread Kerin Millar

On 10/09/2014 04:21, Joseph wrote:

On 09/10/14 03:59, Kerin Millar wrote:

On 09/09/2014 19:36, Joseph wrote:

[snip]



Running on my other system I get:
equery b libstdc++.so.6
* Searching for libstdc++.so.6 ... sys-devel/gcc-4.5.4
(/usr/lib/gcc/x86_64-pc-linux-gnu/4.5.4/libstdc++.so.6 -
libstdc++.so.6.0.14)

env-update - doesn't work either



Check beneath /etc/env.d/ld.so.conf.d and ensure that there is a file
defining the appropriate paths for your current version of gcc. Here's
how it looks on my system:

  # cd /etc/ld.so.conf.d
  # ls
  05binutils.conf  05gcc-x86_64-pc-linux-gnu.conf
  # cat 05gcc-x86_64-pc-linux-gnu.conf
  /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3/32
  /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3

Once you have made any necessary changes, run ldconfig.

--Kerin


Thanks Kerin, for the pointer.
I think I have a bigger problem, and don't know how to fix it.

Yes, I have the same file /etc/ld.so.conf.d
# ls # 05gcc-x86_64-pc-linux-gnu.conf
# cat /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3/32
/usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3

However, those directories are empty (only one file):
# ls -al /usr/lib/
libbrcomplpr2.so


Is /usr/lib an actual directory or a symlink? Assuming that you use a 
stock amd64 (multilib) profile, it should be a symlink to lib64. If you 
find that it is a directory and that you also have a lib64 directory, 
try the commands below. You can skip the busybox and exit commands if 
you are doing this in a chroot rather than on a live system.


  # busybox sh
  # cd /usr/
  # mv lib lib.old
  # ln -s lib64 lib
  # exit


On my other working system this directory /usr/lib/ contains about 2020
files.
What had happened?
After emerging some files and system I was running command: fstrim -v /
(as the disk is SSD).
Could it have something to do with the fact that these directories are
empty?


No. Using fstrim does not delete files.

--Kerin



Re: [gentoo-user] can not compile / emerge

2014-09-09 Thread Kerin Millar

On 09/09/2014 22:38, Mick wrote:

On Tuesday 09 Sep 2014 20:15:09 Joseph wrote:

On 09/09/14 14:46, Todd Goodman wrote:

* Joseph syscon...@gmail.com [140909 14:37]:

I was installing an application gimp and all of a sudden I got an error:

[SNIP]


configure: error: in
`/var/tmp/portage/media-gfx/gimp-2.8.10-r1/work/gimp-2.8.10':
configure: error: C compiler cannot create executables
See `config.log' for more details


What does 'gcc-config -l' show?


I tried blindly to set the active gcc with gcc-config 1, but I get the same
error:

  * gcc-config: Could not get portage CHOST!
  * gcc-config: You should verify that CHOST is set in one of these places:
  * gcc-config:  - //etc/portage/make.conf
  * gcc-config:  - active environment

It is a new installation on SSD and it is broken.
I can't proceed with gcc upgrade/setting, in fact my system is currently
broken.


You do not have an /etc/portage/make.conf file, or you have not configured the
default with the correct settings?


This is not pertinent to the matter at hand. Even if make.conf cannot be 
sourced, CHOST will be sourced from the profile and portage is perfectly 
capable of functioning.


There is no need for a user to define CHOST. At best, it is a no-op and, 
at worst, the user may screw up and define it in such a way that it is 
at odds with the profile.


The error message from gcc-config isn't particularly helpful because it 
falsely implies that make.conf or the active environment - as opposed 
to portage's environment - are the only valid sources.


--Kerin



Re: [gentoo-user] [SOLVED] can not compile / emerge

2014-09-09 Thread Kerin Millar

On 10/09/2014 04:50, Joseph wrote:

On 09/10/14 04:27, Kerin Millar wrote:

On 10/09/2014 04:21, Joseph wrote:

On 09/10/14 03:59, Kerin Millar wrote:

On 09/09/2014 19:36, Joseph wrote:

[snip]



Running on my other system I get:
equery b libstdc++.so.6
* Searching for libstdc++.so.6 ... sys-devel/gcc-4.5.4
(/usr/lib/gcc/x86_64-pc-linux-gnu/4.5.4/libstdc++.so.6 -
libstdc++.so.6.0.14)

env-update - doesn't work either



Check beneath /etc/env.d/ld.so.conf.d and ensure that there is a file
defining the appropriate paths for your current version of gcc. Here's
how it looks on my system:

  # cd /etc/ld.so.conf.d
  # ls
  05binutils.conf  05gcc-x86_64-pc-linux-gnu.conf
  # cat 05gcc-x86_64-pc-linux-gnu.conf
  /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3/32
  /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3

Once you have made any necessary changes, run ldconfig.

--Kerin


Thanks Kerin, for the pointer.
I think I have a bigger problem, and don't know how to fix it.

Yes, I have the same file /etc/ld.so.conf.d
# ls # 05gcc-x86_64-pc-linux-gnu.conf
# cat /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3/32
/usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3

However, those directories are empty (only one file):
# ls -al /usr/lib/
libbrcomplpr2.so


Is /usr/lib an actual directory or a symlink? Assuming that you use a
stock amd64 (multilib) profile, it should be a symlink to lib64. If you
find that it is a directory and that you also have a lib64 directory,
try the commands below. You can skip the busybox and exit commands if
you are doing this in a chroot rather than on a live system.

  # busybox sh
  # cd /usr/
  # mv lib lib.old
  # ln -s lib64 lib
  # exit


On my other working system this directory /usr/lib/ contain about 2020
files.
What had happened?
After emerging some files and system I was running command: fstrim -v /
(as the disk is SSD).
Could it have something to do with the fact that these directories are
empty?


No. Using fstrim does not delete files.

--Kerin


Kerin you are a magician! THANK YOU!!!
Yes, it worked.  Everything is back to normal.

I still cannot comprehend what happened :-/ why, all of a sudden, in
the middle of compilation, it all vanished.


Were you doing anything outside of portage that may have had a hand in it?

Incidentally, you should move libbrcomplpr2.so to /usr/lib32. Some 
googling suggests to me that it is a library included in a proprietary 
Brother printer driver package. You can use the file command to confirm 
that it is a 32-bit library.


--Kerin



Re: [gentoo-user] [SOLVED] can not compile / emerge

2014-09-09 Thread Kerin Millar

On 10/09/2014 05:16, Joseph wrote:

On 09/10/14 04:57, Kerin Millar wrote:

On 10/09/2014 04:50, Joseph wrote:

On 09/10/14 04:27, Kerin Millar wrote:

On 10/09/2014 04:21, Joseph wrote:

On 09/10/14 03:59, Kerin Millar wrote:

On 09/09/2014 19:36, Joseph wrote:

[snip]



Running on my other system I get:
equery b libstdc++.so.6
* Searching for libstdc++.so.6 ... sys-devel/gcc-4.5.4
(/usr/lib/gcc/x86_64-pc-linux-gnu/4.5.4/libstdc++.so.6 -
libstdc++.so.6.0.14)

env-update - doesn't work either



Check beneath /etc/env.d/ld.so.conf.d and ensure that there is a file
defining the appropriate paths for your current version of gcc.
Here's
how it looks on my system:

  # cd /etc/ld.so.conf.d
  # ls
  05binutils.conf  05gcc-x86_64-pc-linux-gnu.conf
  # cat 05gcc-x86_64-pc-linux-gnu.conf
  /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3/32
  /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3

Once you have made any necessary changes, run ldconfig.

--Kerin


Thanks Kerin, for the pointer.
I think I have a bigger problem, and don't know how to fix it.

Yes, I have the same file /etc/ld.so.conf.d
# ls # 05gcc-x86_64-pc-linux-gnu.conf
# cat /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3/32
/usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3

However, those directories are empty (only one file):
# ls -al /usr/lib/
libbrcomplpr2.so


Is /usr/lib an actual directory or a symlink? Assuming that you use a
stock amd64 (multilib) profile, it should be a symlink to lib64. If you
find that it is a directory and that you also have a lib64 directory,
try the commands below. You can skip the busybox and exit commands if
you are doing this in a chroot rather than on a live system.

  # busybox sh
  # cd /usr/
  # mv lib lib.old
  # ln -s lib64 lib
  # exit


On my other working system this directory /usr/lib/ contain about
2020
files.
What had happened?
After emerging some files and system I was running command: fstrim
-v /
(as the disk is SSD).
Could it have something to do with the fact that these directories are
empty?


No. Using fstrim does not delete files.

--Kerin


Kerin you are a magician! THANK YOU!!!
Yes, it worked.  Everything is back to normal.

I can still not comprehend what had happened :-/ why all of a sudden in
the middle of compilation it all vanished.


Were you doing anything outside of portage that may have had a hand in
it?

Incidentally, you should move libbrcomplpr2.so to /usr/lib32. Some
googling suggests to me that it is a library included in a proprietary
Brother printer driver package. You can use the file command to confirm
that it is a 32-bit library.

--Kerin


I was logged in over ssh in one terminal, compiling xsane,
and, in another terminal, was manually installing the Brother printer
driver (without emerge).
I followed my own instructions from:
http://forums.gentoo.org/viewtopic-t-909052-highlight-brother.html?sid=1ba0b92db499262c6a74919d86c6af43


I ran: tar zxvf ./hl5370dwlpr-2.0.3-1.i386.tar.gz -C / and then tar zxvf
./cupswrapperHL5370DW-2.0.4-1.i386.tar.gz -C /
Could it be that one of these scripts messed up the links?
If so, I don't know how it could have happened. Looking through history,
these are the commands I ran:

305  tar zxvf ./brhl5250dnlpr-2.0.1-1.i386.tar.gz -C /
  306  tar zxvf ./cupswrapperHL5250DN-2.0.1-1.i386.tar.gz -C /
  307  cd /usr/local/Brother/cupswrapper
  308  mv cupswrapperHL5250DN-2.0.1 cupswrapperHL5250DN-2.0.1.bak
  309  /bin/sed 's/\/etc\/init.d\/cups\ restart/\/etc\/init.d\/cupsd\
restart/g' ./cupswrapperHL5250DN-2.0.1.bak  ./cupswrapperHL5250DN-2.0.1
  310  ls -al
  311  pwd
  312  ll
  313  ls -al
  314  chmod 755 cupswrapperHL5250DN-2.0.1

I just extracted the files with tar...


I read your forum post and can see that you're (dangerously) extracting 
directly into the root directory and that this is among the contents of 
the archive:


  ./usr/lib/
  ./usr/lib/libbrcomplpr2.so

I posit that tar clobbers the /usr/lib symlink, converting it into a 
directory because that is what is stored in the archive.


Ergo, use the --keep-directory-symlink parameter.
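
The effect of the option can be reproduced in miniature (a sketch with 
hypothetical file names):

```shell
# Build a tree where "lib" is a symlink to "lib64", as on a multilib
# system, plus an archive that contains "lib" as a real directory.
cd "$(mktemp -d)"
mkdir -p root/lib64 payload/lib
touch payload/lib/libfoo.so
ln -s lib64 root/lib
tar -C payload -cf payload.tar lib
# With --keep-directory-symlink, GNU tar extracts *through* the
# symlink instead of replacing it with a directory:
tar -C root --keep-directory-symlink -xf payload.tar
ls -ld root/lib        # still a symlink to lib64
ls root/lib64          # libfoo.so landed in lib64
```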

--Kerin



Re: [gentoo-user] [SOLVED] can not compile / emerge

2014-09-09 Thread Kerin Millar

On 10/09/2014 06:01, Kerin Millar wrote:

On 10/09/2014 05:16, Joseph wrote:

On 09/10/14 04:57, Kerin Millar wrote:

On 10/09/2014 04:50, Joseph wrote:

On 09/10/14 04:27, Kerin Millar wrote:

On 10/09/2014 04:21, Joseph wrote:

On 09/10/14 03:59, Kerin Millar wrote:

On 09/09/2014 19:36, Joseph wrote:

[snip]



Running on my other system I get:
equery b libstdc++.so.6
* Searching for libstdc++.so.6 ... sys-devel/gcc-4.5.4
(/usr/lib/gcc/x86_64-pc-linux-gnu/4.5.4/libstdc++.so.6 -
libstdc++.so.6.0.14)

env-update - doesn't work either



Check beneath /etc/env.d/ld.so.conf.d and ensure that there is a
file
defining the appropriate paths for your current version of gcc.
Here's
how it looks on my system:

  # cd /etc/ld.so.conf.d
  # ls
  05binutils.conf  05gcc-x86_64-pc-linux-gnu.conf
  # cat 05gcc-x86_64-pc-linux-gnu.conf
  /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3/32
  /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3

Once you have made any necessary changes, run ldconfig.

--Kerin


Thanks Kerin, for the pointer.
I think I have a bigger problem, and don't know how to fix it.

Yes, I have the same file /etc/ld.so.conf.d
# ls # 05gcc-x86_64-pc-linux-gnu.conf
# cat /usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3/32
/usr/lib/gcc/x86_64-pc-linux-gnu/4.7.3

However, those directories are empty (only one file):
# ls -al /usr/lib/
libbrcomplpr2.so


Is /usr/lib an actual directory or a symlink? Assuming that you use a
stock amd64 (multilib) profile, it should be a symlink to lib64. If
you
find that it is a directory and that you also have a lib64 directory,
try the commands below. You can skip the busybox and exit commands if
you are doing this in a chroot rather than on a live system.

  # busybox sh
  # cd /usr/
  # mv lib lib.old
  # ln -s lib64 lib
  # exit


On my other working system this directory /usr/lib/ contain about
2020
files.
What had happened?
After emerging some files and system I was running command: fstrim
-v /
(as the disk is SSD).
Could it have something to do with the fact that these directories
are
empty?


No. Using fstrim does not delete files.

--Kerin


Kerin you are a magician! THANK YOU!!!
Yes, it worked.  Everything is back to normal.

I can still not comprehend what had happened :-/ why all of a sudden in
the middle of compilation it all vanished.


Were you doing anything outside of portage that may have had a hand in
it?

Incidentally, you should move libbrcomplpr2.so to /usr/lib32. Some
googling suggests to me that it is a library included in a proprietary
Brother printer driver package. You can use the file command to confirm
that it is a 32-bit library.

--Kerin


I was logged in over ssh in one terminal, compiling xsane
and logged in, in another terminal and was installing brother printer
driver (without emerge) manual installation.
I followed my own instructions from:
http://forums.gentoo.org/viewtopic-t-909052-highlight-brother.html?sid=1ba0b92db499262c6a74919d86c6af43



I run: tar zxvf ./hl5370dwlpr-2.0.3-1.i386.tar.gz -C / tar zxvf
./cupswrapperHL5370DW-2.0.4-1.i386.tar.gz -C /
Could be that one of this script messed up the links.
If so I don't know how could it happen. Looking though history these
are the commands I run:

305  tar zxvf ./brhl5250dnlpr-2.0.1-1.i386.tar.gz -C /
  306  tar zxvf ./cupswrapperHL5250DN-2.0.1-1.i386.tar.gz -C /
  307  cd /usr/local/Brother/cupswrapper
  308  mv cupswrapperHL5250DN-2.0.1 cupswrapperHL5250DN-2.0.1.bak
  309  /bin/sed 's/\/etc\/init.d\/cups\ restart/\/etc\/init.d\/cupsd\
restart/g' ./cupswrapperHL5250DN-2.0.1.bak  ./cupswrapperHL5250DN-2.0.1
  310  ls -al
  311  pwd
  312  ll
  313  ls -al
  314  chmod 755 cupswrapperHL5250DN-2.0.1

I just extracted the files with tar...


I read your forum post and can see that you're (dangerously) extracting
directly into the root directory and that this is among the contents of
the archive:

   ./usr/lib/
   ./usr/lib/libbrcomplpr2.so

I posit that tar clobbers the /usr/lib symlink, converting it into a
directory because that is what is stored in the archive.

Ergo, use the --keep-directory-symlink parameter.


Excuse the fact that I am replying to myself, but I must also stress 
that the library does not belong in lib64. On a 64-bit system, you 
should adapt your process so that the library ends up residing in lib32, 
not lib64 (by way of the lib symlink). The software will not be able to 
function correctly otherwise.


--Kerin



Re: [gentoo-user] MBR partition

2014-09-06 Thread Kerin Millar

On 06/09/2014 04:10, Joseph wrote:

On 09/05/14 21:02, Joseph wrote:

I'm configuring an MBR partition table for an older disk and need to
know what type code to enter for the boot partition.
My BIOS is not the EFI type.


Not that it particularly matters but a partition dedicated to /boot 
contains a Linux filesystem and, thus, 83 is appropriate.




My current configuration:
fdisk -l /dev/sda

Disk /dev/sda: 447.1 GiB, 480103981056 bytes, 937703088 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x021589e5

Device     Boot   Start       End    Blocks  Id System
/dev/sda1  *       2048    155647     76800  83 Linux
/dev/sda2        155648   4349951   2097152  82 Linux swap / Solaris
/dev/sda3       4349952 937703087 466676568  83 Linux

Does sda1 have to start at 1 or at 2048?


As of util-linux-2.18, partitions are aligned to 1 MiB boundaries by 
default, so as to avoid performance degradation on SSDs and advanced 
format drives [1].


Further, beginning at 2048 as opposed to 63 (in the manner of MS-DOS) 
provides more room for boot loaders such as grub to embed themselves.


To have the first sector be a partition boundary is impossible because 
that is the location of the MBR and the partition table [2].


In summary, let it be.

--Kerin

[1] https://bugs.gentoo.org/show_bug.cgi?id=304727
[2] https://en.wikipedia.org/wiki/Master_boot_record



Re: [gentoo-user] Re: MBR partition

2014-09-06 Thread Kerin Millar

On 06/09/2014 13:54, Alan McKinnon wrote:

On 06/09/2014 14:48, Dale wrote:

James wrote:

Joseph syscon780 at gmail.com writes:


Thank you for the information.
I'll continue on Monday and let you know.  If it will not boot with the
partition starting at sector 2048, I will re-partition /boot (sda1) to
start at sector 63.


Take some time to research and reflect on your needs (desires?)
regarding which file system to use. The ext family (ext2, ext4) is
always popular and safe. Some are very happy with btrfs, and there are
many other interesting choices (ZFS, XFS, etc.).

There is no best solution, but the ext family offers tried and proven
options. YMMV.


hth,
James



I'm not sure if it is ZFS or XFS, but I seem to recall one of those does
not like sudden shutdowns, such as a power failure.  Maybe that has
changed since I last tried whichever one it is that has that issue.  If
you have a UPS though, it shouldn't be so much of a problem, unless your
power supply goes out.


XFS.

It was designed by SGI for their video rendering workstations back in the
day and used very aggressive caching to get enormous throughput. It was
also brilliant at dealing with directories containing thousands of small
files - not unusual when dealing with video editing.

However, it was also designed for environments where the power is
guaranteed never to go off (which explains why they decided to go with
such aggressive caching). If you use it in environments where power cuts
are not guaranteed not to happen, well...


Well, what? It's no less reliable than other filesystems that employ 
delayed allocation (any modern filesystem worthy of note). Over recent 
years, I have used both XFS and ext4 extensively in production and have 
found that the former trumps the latter in reliability.


While I like them both, I predicate this assertion mainly on some of the 
silly bugs that I have seen crop up in the ext4 codebase and the 
unedifying commentary that has occasionally ensued. From reading the XFS 
list and my own experience, I have formed the opinion that the 
maintainers are more stringent in matters of QA and regression testing 
and more mature in matters of public debate. I also believe that 
regressions in stability are virtually unheard of, whereas regressions 
in performance are identified quickly and taken very seriously [1].


The worst thing I could say about XFS is that it was comparatively slow 
until the introduction of delayed logging (an idea taken from ext3) [2][3]. 
Nowadays, it is on a par with ext4 and, in some cases, scales 
better. It is also one of the few filesystems besides ZFS that can 
dynamically allocate inodes.






ZFS is the most resilient filesystem I've ever used, you can throw the
bucket and kitchen sink at it and it really doesn't give a shit (it just
deals with it :-) )


While its design is intrinsically resilient - particularly its 
capability to protect against bitrot - I don't believe that ZFS on Linux 
is more reliable in practice than the filesystems included in the Linux 
kernel. Quite the contrary. Look at the issues labelled as "Bug" filed 
for both the SPL and ZFS projects. There are a considerable number of 
serious bugs that - to my mind - disqualify it for anything but hobbyist 
use and I take issue with the increasing tendency among the community to 
casually recommend it.


Here's my anecdotal experience of using it. My hosting company recently 
installed a dedicated backup server that was using ZFS on Linux. Its 
primary function was as an NFS server. It was very slow and repeatedly 
deadlocked under heavy load. On each occasion, the only remedy was for 
an engineer to perform a hard reboot. When I complained about it, I was 
told that they normally use FreeBSD but had opted for Linux because the 
former was not compatible with a fibre channel adapter that they needed 
to make use of. I then requested that the filesystem be changed to ext4, 
after which the server was rock solid.


Another experience I have is of helping someone resolve an issue where 
MySQL was not starting. It transpired that he was using ZFS and that it 
does not support native AIO. I supplied him with a workaround but 
sternly advised him to switch to a de-facto Linux filesystem if he 
valued his data and expected anything like decent performance from 
InnoDB. Speaking of which, XFS is a popular filesystem among 
knowledgeable MySQL hackers (such as Mark Callaghan) and DBAs alike.
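The workaround itself is not spelled out in the message. A plausible sketch of what it may have looked like (a hypothetical my.cnf fragment, not the author's actual advice; `innodb_use_native_aio` exists as of MySQL 5.5):

```ini
# Hypothetical my.cnf fragment (illustrative only). On a filesystem
# without native AIO support, InnoDB can be told to fall back to its
# simulated AIO implementation:
[mysqld]
innodb_use_native_aio = 0

# On XFS or ext4, O_DIRECT is commonly used with InnoDB to avoid
# double-buffering through the page cache:
# innodb_flush_method = O_DIRECT
```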


For the time being, I think that there are other operating systems whose 
ZFS implementation is more robust.


--Kerin

[1] 
http://www.percona.com/blog/2012/03/15/ext4-vs-xfs-on-ssd/#comment-903938
[2] 
https://www.kernel.org/doc/Documentation/filesystems/xfs-delayed-logging-design.txt

[3] https://www.youtube.com/watch?v=FegjLbCnoBw



Re: [gentoo-user] Re: MBR partition

2014-09-06 Thread Kerin Millar

On 07/09/2014 01:28, Dale wrote:

Kerin Millar wrote:

On 06/09/2014 13:54, Alan McKinnon wrote:

On 06/09/2014 14:48, Dale wrote:

James wrote:

Joseph syscon780 at gmail.com writes:


Thank you for the information.
I'll continue on Monday and let you know.  If it will not boot
with sector

starting at 2048, I will

re-partition /boot sda1 to start at 63.


Take some time to research and reflect on your needs (desires?)
about which file system to use. (ext 2,4) is always popular and safe.
Some are very happy with BTRFS and there are many other interesting
choices (ZFS, XFS, etc etc)..

There is no best solution; but the EXT family offers tried and proven
options. YMMV.


hth,
James



I'm not sure if it is ZFS or XFS but I seem to recall one of those does
not like sudden shutdowns, such as a power failure.  Maybe that has
changed since I last tried whichever one it is that has that issue.  If
you have a UPS tho, shouldn't be so much of a problem, unless your
power
supply goes out.


XFS.

It was designed by SGI for their video rendering workstations back in the
day and used very aggressive caching to get enormous throughput. It was
also brilliant at dealing with directories containing thousands of small
files - not unusual when dealing with video editing.

However, it was also designed for environments where the power is
guaranteed to never go off (which explains why they decided to go with
such aggressive caching). If you use it in environments where powerouts
are not guaranteed to not happen, well..


Well what? It's no less reliable than other filesystems that employ
delayed allocation (any modern filesystem worthy of note). In recent
years, I have used both XFS and ext4 extensively in production and have
found that the former trumps the latter in reliability.

While I like them both, I predicate this assertion mainly on some of
the silly bugs that I have seen crop up in the ext4 codebase and the
unedifying commentary that has occasionally ensued. From reading the
XFS list and my own experience, I have formed the opinion that the
maintainers are more stringent in matters of QA and regression testing
and more mature in matters of public debate. I also believe that
regressions in stability are virtually unheard of, whereas regressions
in performance are identified quickly and taken very seriously [1].

The worst thing I could say about XFS is that it was comparatively
slow until the introduction of delayed logging (an idea taken from
ext3) [2][3]. Nowadays, it is on a par with ext4 and, in some cases,
scales better. It is also one of the few filesystems besides ZFS that
can dynamically allocate inodes.
SNIP
--Kerin

[1]
http://www.percona.com/blog/2012/03/15/ext4-vs-xfs-on-ssd/#comment-903938
[2]
https://www.kernel.org/doc/Documentation/filesystems/xfs-delayed-logging-design.txt
[3] https://www.youtube.com/watch?v=FegjLbCnoBw




The point I was making in my comment was about if the power fails
without a proper shutdown.  When I used it a long time ago, it worked
fine, until there was a sudden power loss.  That is when problems pop
up.  If a person has a UPS, should be good to go.


The point I was making is that there is not a shred of evidence to 
suggest that XFS is any less resilient in this scenario than newer 
filesystems employing delayed allocation such as ext4, btrfs and ZFS. 
What I take issue with is that people continue to single XFS out for 
criticism, regardless. Let XFS be judged as it stands today, just as 
any other actively developed filesystem should be.


Filesystem implementations are not set in stone. Just as ext4 developers 
had to resolve certain engineering challenges raised by the use of 
delayed allocation, so have XFS developers had to do the same before 
them [1].


Arguments generally critical of the use of delayed allocation where 
power loss is a likely event would hold water. Fortunately, options 
remain for such a scenario (ext3, ext4 + nodelalloc).
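As a sketch of the ext4 + nodelalloc option, a hypothetical /etc/fstab entry (the device and mount point are placeholders) might read:

```text
# nodelalloc makes ext4 allocate blocks at write() time instead of
# deferring allocation until writeback. /dev/sda2 and /data are
# placeholders, not a recommendation.
/dev/sda2  /data  ext4  defaults,nodelalloc  0 2
```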


--Kerin

[1] 
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=7d4fb40




Re: [gentoo-user] trouble overriding a command name

2014-08-28 Thread Kerin Millar

On 28/08/2014 16:42, gottl...@nyu.edu wrote:

I know this is trivial and apologize in advance for what must be a simple
(non-gentoo) error on my part.

In /home/gottlieb/bin/dia I have the following
 #!/bin/bash
 /usr/bin/dia --integrated

ls -l /home/gottlieb/bin/dia gives
 -rwxr-xr-x 1 gottlieb gottlieb 38 Aug 28 11:28 /home/gottlieb/bin/dia

echo $PATH gives
 
/home/gottlieb/bin/:/usr/local/bin:/usr/bin:/bin:/opt/bin:/usr/x86_64-pc-linux-gnu/gcc-bin/4.8.2:/usr/games/bin

which dia gives
 /home/gottlieb/bin/dia

`which dia`
 correctly starts dia in integrated (one window) mode

/home/gottlieb/bin/dia
 correctly starts in integrated mode

/usr/bin/dia
 correctly starts dia in the default (two window) mode

BUT plain
dia
 incorrectly starts dia in the default mode.



You might need to run `hash -r` in your current shell. If that doesn't 
help then ensure that there is no alias. Incidentally, I would suggest 
that you write this in your script:


  exec /usr/bin/dia --integrated

Refer to `help exec` for the reasoning.
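A minimal self-contained sketch of the pattern (with /bin/echo standing in for /usr/bin/dia so it can be run anywhere): `exec` replaces the wrapper's shell process with the target program, so no extra bash process lingers, and signals and the exit status pass straight through.

```shell
#!/bin/bash
# Create a throwaway wrapper script that uses exec; /bin/echo stands
# in for /usr/bin/dia here so the sketch is self-contained.
wrapper=$(mktemp)
cat > "$wrapper" <<'EOF'
#!/bin/bash
# exec replaces this shell with the target program: no child process.
exec /bin/echo --integrated "$@"
EOF
chmod +x "$wrapper"

"$wrapper" foo.dia        # prints: --integrated foo.dia
rm -f "$wrapper"
```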

--Kerin



Re: [gentoo-user] Software RAID-1

2014-08-26 Thread Kerin Millar

On 26/08/2014 10:38, Peter Humphrey wrote:

On Monday 25 August 2014 18:46:23 Kerin Millar wrote:

On 25/08/2014 17:51, Peter Humphrey wrote:

On Monday 25 August 2014 13:35:11 Kerin Millar wrote:

I now wonder if this is a race condition between the init script running
`mdadm -As` and the fact that the mdadm package installs udev rules that
allow for automatic incremental assembly?


Isn't it just that, with the kernel auto-assembly of the root partition,
and udev rules having assembled the rest, all the work's been done by the
time the mdraid init script is called? I had wondered about the time that
udev startup takes; assembling the raids would account for it.


Yes, it's a possibility and would constitute a race condition - even
though it might ultimately be a harmless one.


I thought a race involved the competitors setting off at more-or-less the same
time, not one waiting until the other had finished. No matter.


The mdraid script can assemble arrays and runs at a particular point in 
the boot sequence. The udev rules can also assemble arrays and, being 
event-driven, I suspect that they are likely to prevail. The point is 
that both the sequence and timing of these two mechanisms is not 
deterministic. There is definitely the potential for a race condition. I 
just don't yet know whether it is a harmful race condition.





As touched upon in the preceding post, I'd really like to know why mdadm
sees fit to return a non-zero exit code given that the arrays are actually
assembled successfully.


I can see why a dev might think I haven't managed to do my job here.


It may be that mdadm returns different non-zero exit codes depending on 
the exact circumstances. It does have this characteristic for certain 
other operations (such as -t --detail).





After all, even if the arrays are assembled at the point that mdadm is
executed by the mdraid init script, partially or fully, it surely ought
not to matter. As long as the arrays are fully assembled by the time
mdadm exits, it should return 0 to signify success. Nothing else makes
sense, in my opinion. It's absurd that the mdraid script is drawn into
printing a blank error message where nothing has gone wrong.


I agree, that is absurd.


Further, the mdadm ebuild still prints elog messages stating that mdraid
is a requirement for the boot runlevel but, with udev rules, I don't see
how that can be true. With udev being event-driven and calling mdadm
upon the introduction of a new device, the array should be up and
running as of the very moment that all the disks are seen, no matter
whether the mdraid init script is executed or not.


We agree again. The question is what to do about it. Maybe a bug report
against mdadm?


Definitely. Again, can you find out what the exit status is under the 
circumstances that mdadm produces a blank error? I am hoping it is 
something other than 1. If so, solving this problem might be as simple 
as having the mdraid script consider only a specific non-zero value to 
indicate an intractable error.


There is also the matter of whether it makes sense to explicitly 
assemble the arrays in the script where udev rules are already doing the 
job. However, I think this would require further investigation before 
considering making a bug of it.




---8<---


Right. Here's the position:
1.  I've left /etc/init.d/mdraid out of all run levels. I have nothing but
comments in mdadm.conf, but then it's not likely to be read anyway if the
init script isn't running.
2.  I have empty /etc/udev rules files as above.
3.  I have kernel auto-assembly of raid enabled.
4.  I don't use an init ram disk.
5.  The root partition is on /dev/md5 (0.99 metadata)
6.  All other partitions except /boot are under /dev/vg7 which is built on
top of /dev/md7 (1.x metadata).
7.  The system boots normally.


I must confess that this boggles my mind. Under these circumstances, I
cannot fathom how - or when - the 1.x arrays are being assembled.
Something has to be executing mdadm at some point.


I think it's udev. I had a look at the rules, but I no grok. I do see
references to mdadm though.


So would I, only you said in step 2 that you have empty rules, which I 
take to mean that you had overridden the mdadm-provided udev rules with 
empty files. If all of the conditions you describe were true, you would 
have eliminated all three of the aforementioned contexts in which mdadm 
can be invoked. Given that mdadm is needed to assemble your 1.x arrays 
(see below), I would expect such conditions to result in mount errors on 
account of the missing arrays.





Do I even need sys-fs/mdadm installed? Maybe
I'll try removing it. I have a little rescue system in the same box, so
it'd be easy to put it back if necessary.


Yes, you need mdadm because 1.x metadata arrays must be assembled in
userspace.


I realised after writing that that I may well need it for maintenance. I'd do
that from my rescue system though, which does have it installed.

Re: [gentoo-user] Software RAID-1

2014-08-26 Thread Kerin Millar

On 26/08/2014 15:54, Peter Humphrey wrote:

On Tuesday 26 August 2014 14:21:19 Kerin Millar wrote:

On 26/08/2014 10:38, Peter Humphrey wrote:

On Monday 25 August 2014 18:46:23 Kerin Millar wrote:

On 25/08/2014 17:51, Peter Humphrey wrote:

On Monday 25 August 2014 13:35:11 Kerin Millar wrote:

---8<---

Again, can you find out what the exit status is under the circumstances that
mdadm produces a blank error? I am hoping it is something other than 1.


I've remerged mdadm to run this test. I'll report the result in a moment.
[...] In fact it returned status 1. Sorry to disappoint :)


Thanks for testing. Can you tell me exactly what /etc/mdadm.conf 
contained at the time?





Here's the position:
1.  I've left /etc/init.d/mdraid out of all run levels. I have nothing
but comments in mdadm.conf, but then it's not likely to be read anyway
if the init script isn't running.
2.  I have empty /etc/udev rules files as above.
3.  I have kernel auto-assembly of raid enabled.
4.  I don't use an init ram disk.
5.  The root partition is on /dev/md5 (0.99 metadata)
6.  All other partitions except /boot are under /dev/vg7 which is built
on top of /dev/md7 (1.x metadata).
7.  The system boots normally.


I must confess that this boggles my mind. Under these circumstances, I
cannot fathom how - or when - the 1.x arrays are being assembled.
Something has to be executing mdadm at some point.


I think it's udev. I had a look at the rules, but I no grok. I do see
references to mdadm though.

So would I, only you said in step 2 that you have empty rules, which I
take to mean that you had overridden the mdadm-provided udev rules with
empty files.


Correct; that's what I did, but since removing mdadm I've also removed the
corresponding, empty /etc/udev files.

I don't think it's udev any more; I now think the kernel is cleverer than we
gave it credit for (see below and attached dmesg).


Absolutely not ...

https://raid.wiki.kernel.org/index.php/RAID_superblock_formats#A_Note_about_kernel_autodetection_of_different_superblock_formats

https://github.com/torvalds/linux/blob/master/Documentation/md.txt

Both texts state unequivocally that kernel autodetection/assembly only 
works with the old superblock format.


Having read your dmesg.txt, I can only conclude that all of the arrays 
that the kernel is assembling are using the old superblock format, 
contrary to the information you have provided up until now. If so, then 
you do not rely on any of the three methods that I (correctly) said were 
necessary for 1.x superblock arrays.


To settle the matter, check the superblock versions using the method 
described below.





If all of the conditions you describe were true, you would have eliminated
all three of the aforementioned contexts in which mdadm can be invoked. Given
that mdadm is needed to assemble your 1.x arrays (see below), I would expect
such conditions to result in mount errors on account of the missing arrays.

---8<---

Again, 1.x arrays must be assembled in userspace. The kernel cannot
assemble them by itself as it can with 0.9x arrays. If you uninstall
mdadm, you will be removing the very userspace tool that is employed for
assembly. Neither udev nor mdraid will be able to execute it, which
cannot end well.


I had done that, with no ill effect. I've just booted the box with no mdadm
present. It seems the kernel can after all assemble the arrays (see attached
dmesg.txt, edited). Or maybe I was wrong about the metadata and they're all
0.99. In course of checking this I tried a couple of things:

# lvm pvck /dev/md7
   Found label on /dev/md7, sector 1, type=LVM2 001
   Found text metadata area: offset=4096, size=1044480
# lvm vgdisplay
   --- Volume group ---
   VG Name               vg7
   System ID
   Format                lvm2
   Metadata Areas        1
   Metadata Sequence No  14
   VG Access             read/write
   VG Status             resizable
   MAX LV                0
   Cur LV                13
   Open LV               13
   Max PV                0
   Cur PV                1
   Act PV                1
   VG Size               500.00 GiB
   PE Size               4.00 MiB
   Total PE              127999
   Alloc PE / Size       108800 / 425.00 GiB
   Free  PE / Size       19199 / 75.00 GiB
   VG UUID               ll8OHc-if2H-DVTf-AxrQ-5EW0-FOLM-Z73y0z

Can you tell from that which metadata version I used when I created vg7? It
looks like 1.x to me, since man lvm refers to formats (=metadata types) lvm1
and lvm2 - or am I reading too much into that?


LVM has nothing to do with md. I did allude to this in my first response 
on the thread. The above output demonstrates that you have designated an 
md block device as a PV (LVM physical volume). Any block device can be a 
PV - LVM does not care.


When I talk about 1.x metadata, I am talking about the md superblock. 
You can find out what the metadata format is like so:-


# mdadm --detail /dev/md7 | grep Version

To be clear, LVM does not enter into it.



See

Re: [gentoo-user] Software RAID-1 - FIXED

2014-08-26 Thread Kerin Millar

On 26/08/2014 17:49, Peter Humphrey wrote:

On Tuesday 26 August 2014 17:00:37 Kerin Millar wrote:

On 26/08/2014 15:54, Peter Humphrey wrote:

On Tuesday 26 August 2014 14:21:19 Kerin Millar wrote:

On 26/08/2014 10:38, Peter Humphrey wrote:

On Monday 25 August 2014 18:46:23 Kerin Millar wrote:

On 25/08/2014 17:51, Peter Humphrey wrote:

On Monday 25 August 2014 13:35:11 Kerin Millar wrote:

---8<---


Again, can you find out what the exit status is under the circumstances
that mdadm produces a blank error? I am hoping it is something other
than 1.

I've remerged mdadm to run this test. I'll report the result in a moment.
[...] In fact it returned status 1. Sorry to disappoint :)


Thanks for testing. Can you tell me exactly what /etc/mdadm.conf
contained at the time?


It was the installed file, untouched, which contains only comments.


LVM has nothing to do with md.


No, I know. I was just searching around for sources of info.


When I talk about 1.x metadata, I am talking about the md superblock.
You can find out what the metadata format is like so:-

# mdadm --detail /dev/md7 | grep Version


That's what I was looking for - thanks. It shows version 0.90. I did suspect
that before, as I said, but couldn't find the command to check. If I had, I
might not have started this thread.

So all this has been for nothing. I was sure I'd set 1.x metadata when
creating the md device, but I must eat humble pie and glare once again at my
own memory.


Not to worry. However, I still think that it's a bug that mdadm behaves 
as it does, leading to the curious behaviour of the mdraid script. 
Please consider filing one and, if you do so, cc me into it. I have an 
interest in pursuing it.


--Kerin



Re: [gentoo-user] Software RAID-1

2014-08-25 Thread Kerin Millar

On 25/08/2014 10:22, Peter Humphrey wrote:

On Sunday 24 August 2014 19:22:40 Kerin Millar wrote:

On 24/08/2014 14:51, Peter Humphrey wrote:

---8<---

So I decided to clean up /etc/mdadm.conf by adding these lines:

DEVICE /dev/sda* /dev/sdb*
ARRAY /dev/md5 devices=/dev/sda5,/dev/sdb5
ARRAY /dev/md7 devices=/dev/sda7,/dev/sdb7
ARRAY /dev/md9 devices=/dev/sda9,/dev/sdb9


Perhaps you should not include /dev/md5 here.


I wondered about that.


As you have made a point of building the array containing the root
filesystem with 0.99 metadata, ...


...as was instructed in the howto at the time...


I would assume that it is being assembled in kernelspace as a result of
CONFIG_MD_AUTODETECT being enabled.


Yes, I think that's what's happening.


Alternatively, perhaps you are using an initramfs.


Nope.


Either way, by the time the mdraid init.d script executes, the /dev/md5
array must - by definition - be up and mounted. Does it make a
difference if you add the following line to the config?

AUTO +1.x homehost -all

That will prevent it from considering arrays with 0.99 metadata.


No, I get the same result. Just a red asterisk at the left end of the line
after "Starting up RAID devices ...".


It since dawned upon me that defining AUTO as such won't help because 
you define the arrays explicitly. Can you try again with the mdraid 
script in the default runlevel but without the line defining /dev/md5?




Now that I look at /etc/init.d/mdraid I see a few things that aren't quite
kosher. The first is that it runs mdadm -As 2>&1, which returns null after
booting is finished (whence the empty line before the asterisk). Then it tests


Interesting. I think that you should file a bug because the implication 
is that mdadm is returning a non-zero exit status in the case of arrays 
that have already been assembled. Here's a post from the Arch forums 
suggesting the same:


https://bbs.archlinux.org/viewtopic.php?pid=706175#p706175

Is the exit status something other than 1? Try inserting eerror $? 
immediately after the call to mdadm -As. Granted, it's just an annoyance 
but it looks silly, not to mention unduly worrying.
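The suggestion can be sketched generically (echo stands in for OpenRC's eerror helper, and `false` for a failing `mdadm -As`; this is not the actual mdraid script): capture the status immediately after the command, before anything else can clobber $?.

```shell
#!/bin/bash
# Report a command's exit status without losing it to an intervening
# command. 'false' stands in for a failing 'mdadm -As' below.
report_status() {
    "$@"
    local status=$?
    if [ "$status" -ne 0 ]; then
        # An OpenRC init script would call eerror here instead of echo.
        echo "command exited with status $status" >&2
    fi
    return "$status"
}

report_status false    # prints to stderr: command exited with status 1
```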



for the existence of /dev/md_d*. That also doesn't exist, though /dev/md*
does:

# ls -l /dev/md*
brw-rw---- 1 root disk 9, 0 Aug 25 10:03 /dev/md0
brw-rw---- 1 root disk 9, 5 Aug 25 10:03 /dev/md5
brw-rw---- 1 root disk 9, 7 Aug 25 10:03 /dev/md7
brw-rw---- 1 root disk 9, 9 Aug 25 10:03 /dev/md9

/dev/md:
total 0
lrwxrwxrwx 1 root root 6 Aug 25 10:03 5_0 -> ../md5
lrwxrwxrwx 1 root root 6 Aug 25 10:03 7_0 -> ../md7
lrwxrwxrwx 1 root root 6 Aug 25 10:03 9_0 -> ../md9



I think this has something to do with partitionable RAID. Yes, it is 
possible to superimpose partitions upon an md device, though I have 
never seen fit to do so myself. For those that do not, the md_d* device 
nodes won't exist.



Looks like I have some experimenting to do.

I forgot to mention in my first post that, on shutdown, when the script runs
mdadm -Ss 2>&1 I always get "Cannot get exclusive access to /dev/md5" ...
I've always just ignored it until now, but perhaps it's important?


I would guess that it's because a) the array hosts the root filesystem 
b) you have the array explicitly defined in mdadm.conf and mdadm is 
being called with -s/--scan again.





On a related note, despite upstream's efforts to make this as awkward as
possible, it is possible to mimic the kernel's autodetect functionality
in userspace with a config such as this:

HOMEHOST <ignore>
DEVICE partitions
AUTO +1.x -all

Bear in mind that the mdraid script runs `mdadm --assemble --scan`.
There is no need to specifically map out the properties of each array.
This is what the metadata is for.


Thanks for the info, and the help. The fog is dispersing a bit...





Re: [gentoo-user] Software RAID-1

2014-08-25 Thread Kerin Millar

On 25/08/2014 13:18, Kerin Millar wrote:

snip



No, I get the same result. Just a red asterisk at the left end of the line
after "Starting up RAID devices ...".


It since dawned upon me that defining AUTO as such won't help because
you define the arrays explicitly. Can you try again with the mdraid
script in the default runlevel but without the line defining /dev/md5?


Sorry, I wasn't clear. Would you remove/comment the line describing 
/dev/md5 but also include this line:-


  AUTO +1.x -all

--Kerin



Re: [gentoo-user] Software RAID-1

2014-08-25 Thread Kerin Millar

On 25/08/2014 12:17, Peter Humphrey wrote:

snip


Well, it was simple. I just said rc-update del mdraid boot and all is now
well. I'd better revisit the docs to see if they still give the same advice.

-- Regards Peter


Very interesting indeed. I now wonder if this is a race condition 
between the init script running `mdadm -As` and the fact that the mdadm 
package installs udev rules that allow for automatic incremental 
assembly? Refer to /lib/udev/rules.d/64-md-raid.rules and you'll see 
that it calls `mdadm --incremental` for newly added devices.


With that in mind, here's something else for you to try. Doing this will 
render these udev rules null and void:


# touch /etc/udev/rules.d/64-md-raid.rules

Thereafter, the mdraid script will be the only agent trying to assemble 
the 1.x metadata arrays so make sure that it is re-enabled.


I'm not actually sure that there is any point in calling mdadm -As where 
the udev rules are present. I would expect it to be one approach or the 
other, but not both at the same time.


Incidentally, the udev rules were a source of controversy in the 
following bug. Not everyone appreciates that they are installed by default.


https://bugs.gentoo.org/show_bug.cgi?id=401707

--Kerin



Re: [gentoo-user] Software RAID-1

2014-08-25 Thread Kerin Millar

On 25/08/2014 17:51, Peter Humphrey wrote:

On Monday 25 August 2014 13:35:11 Kerin Millar wrote:

On 25/08/2014 12:17, Peter Humphrey wrote:

snip


Well, it was simple. I just said rc-update del mdraid boot and all is
now
well. I'd better revisit the docs to see if they still give the same
advice.


Very interesting indeed.


You wrote this e-mail after the other two, so I'll stick to this route,
leaving the other idea for later if needed.


I now wonder if this is a race condition between the init script running
`mdadm -As` and the fact that the mdadm package installs udev rules that
allow for automatic incremental assembly?


Isn't it just that, with the kernel auto-assembly of the root partition, and
udev rules having assembled the rest, all the work's been done by the time the
mdraid init script is called? I had wondered about the time that udev startup
takes; assembling the raids would account for it.


Yes, it's a possibility and would constitute a race condition - even 
though it might ultimately be a harmless one. As touched upon in the 
preceding post, I'd really like to know why mdadm sees fit to return a 
non-zero exit code given that the arrays are actually assembled 
successfully.


After all, even if the arrays are assembled at the point that mdadm is 
executed by the mdraid init script, partially or fully, it surely ought 
not to matter. As long as the arrays are fully assembled by the time 
mdadm exits, it should return 0 to signify success. Nothing else makes 
sense, in my opinion. It's absurd that the mdraid script is drawn into 
printing a blank error message where nothing has gone wrong.


Further, the mdadm ebuild still prints elog messages stating that mdraid 
is a requirement for the boot runlevel but, with udev rules, I don't see 
how that can be true. With udev being event-driven and calling mdadm 
upon the introduction of a new device, the array should be up and 
running as of the very moment that all the disks are seen, no matter 
whether the mdraid init script is executed or not.



Refer to /lib/udev/rules.d/64-md-raid.rules and you'll see that it calls
`mdadm --incremental` for newly added devices.


# ls -l /lib/udev/rules.d | grep raid
-rw-r--r-- 1 root root 2.1K Aug 23 10:34 63-md-raid-arrays.rules
-rw-r--r-- 1 root root 1.4K Aug 23 10:34 64-md-raid-assembly.rules


With that in mind, here's something else for you to try. Doing this will
render these udev rules null and void:

# touch /etc/udev/rules.d/64-md-raid.rules


I did that, but I think I need instead to
# touch /etc/udev/rules.d/63-md-raid-arrays.rules
# touch /etc/udev/rules.d/64-md-raid-assembly.rules


Ah, yes. Looks like the rules have changed in >=mdadm-3.3. I'm still 
using mdadm-3.2.6-r1.




I'll try it now...


Thereafter, the mdraid script will be the only agent trying to assemble
the 1.x metadata arrays so make sure that it is re-enabled.


Right. Here's the position:
1.  I've left /etc/init.d/mdraid out of all run levels. I have nothing but
comments in mdadm.conf, but then it's not likely to be read anyway if the
init script isn't running.
2.  I have empty /etc/udev rules files as above.
3.  I have kernel auto-assembly of raid enabled.
4.  I don't use an init ram disk.
5.  The root partition is on /dev/md5 (0.99 metadata)
6.  All other partitions except /boot are under /dev/vg7 which is built on
top of /dev/md7 (1.x metadata).
7.  The system boots normally.


I must confess that this boggles my mind. Under these circumstances, I 
cannot fathom how - or when - the 1.x arrays are being assembled. 
Something has to be executing mdadm at some point.





I'm not actually sure that there is any point in calling mdadm -As where
the udev rules are present. I would expect it to be one approach or the
other, but not both at the same time.


That makes sense to me too. Do I even need sys-fs/mdadm installed? Maybe I'll
try removing it. I have a little rescue system in the same box, so it'd be
easy to put it back if necessary.


Yes, you need mdadm because 1.x metadata arrays must be assembled in 
userspace. In Gentoo, there are three contexts I know of in which this 
may occur:-


  1) Within an initramfs
  2) As a result of the udev rules
  3) As a result of the mdraid script




Incidentally, the udev rules were a source of controversy in the
following bug. Not everyone appreciates that they are installed by default.

https://bugs.gentoo.org/show_bug.cgi?id=401707


I'll have a look at that - thanks.



--Kerin



Re: [gentoo-user] Software RAID-1

2014-08-24 Thread Kerin Millar

On 24/08/2014 14:51, Peter Humphrey wrote:

Hello list,

For several years I've been running with / on /dev/md5 (0.99 metadata), which
is built on /dev/sd[ab]5. At each boot I see a message scroll by saying
something like "No devices found in config file or automatically" and then lvm


LVM does not handle md arrays.


continues to assemble md5 anyway and mount its file system. The rest of my
partitions are on /dev/md7 (1.0 metadata), which is built on /dev/sd[ab]7. Oh,
except for /boot, which is on /dev/sda1 with a copy on /dev/sdb1.

So I decided to clean up /etc/mdadm.conf by adding these lines:

DEVICE /dev/sda* /dev/sdb*
ARRAY /dev/md5 devices=/dev/sda5,/dev/sdb5
ARRAY /dev/md7 devices=/dev/sda7,/dev/sdb7
ARRAY /dev/md9 devices=/dev/sda9,/dev/sdb9



Perhaps you should not include /dev/md5 here. As you have made a point 
of building the array containing the root filesystem with 0.99 metadata, 
I would assume that it is being assembled in kernelspace as a result of 
CONFIG_MD_AUTODETECT being enabled. Alternatively, perhaps you are using 
an initramfs.


Either way, by the time the mdraid init.d script executes, the /dev/md5 
array must - by definition - be up and mounted. Does it make a 
difference if you add the following line to the config?


  AUTO +1.x homehost -all

That will prevent it from considering arrays with 0.99 metadata.

On a related note, despite upstream's efforts to make this as awkward as 
possible, it is possible to mimic the kernel's autodetect functionality 
in userspace with a config such as this:


  HOMEHOST ignore
  DEVICE partitions
  AUTO +1.x -all

Bear in mind that the mdraid script runs `mdadm --assemble --scan`. 
There is no need to specifically map out the properties of each array. 
This is what the metadata is for.


--Kerin



Re: [gentoo-user] emerge --config

2014-08-21 Thread Kerin Millar

On 21/08/2014 12:07, Bill Kenworthy wrote:

Hi,
I am building some VM's using scripts and want to run emerge --config
mariadb automatically.  However it asks for a new root password (entered
twice) as part of the process - I was going to make an expect script to
enter the password for me ... but I thought someone might know a better way?



You can execute mysql_install_db directly instead of relying on 
distro-specific voodoo. Something like this should do the trick:-


#!/bin/bash
set -e

# Use SELECT PASSWORD('password') to generate a hash
password_hash=*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19

# Do nothing if MySQL appears to have been configured
[[ -f /var/lib/mysql/ibdata1 ]] && exit 0

# Initialize the database
mysql_install_db

# Set root password, delete spurious root accounts, drop test database
mysql -B <<-SQL
SET PASSWORD FOR 'root'@'%' = '${password_hash}';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
DELETE FROM mysql.user WHERE Host != '%';
FLUSH PRIVILEGES;
DROP DATABASE test;
SQL

--Kerin



Re: [gentoo-user] bash script question

2014-08-18 Thread Kerin Millar

On 18/08/2014 12:29, Stroller wrote:


On Mon, 18 August 2014, at 10:42 am, wraeth wra...@wraeth.id.au wrote:


On Mon, 2014-08-18 at 18:54 +1000, Adam Carter wrote:

But this matches if grep fails both times as well as when it matches both
times. Any ideas?


If you don't mind using a quick loop, you could use something like:

n=0
for f in file1.txt file2.txt file3.txt file4.txt; do
grep 'string' "${f}" > /dev/null && n=$[n+1]
done

if [[ $n == 4 ]]; then
do_something
fi


I've solved similar problems the same way myself, but I hope you'll forgive me 
for offering unsolicited critique on a small detail.

In the above 4 is a constant, and thus it's independent of the number of files 
being tested.

I propose addressing this with an array of the filenames.

Thus additional files can be added for testing, without manual adjustment of 
the expected total.

   files=(file1.txt file2.txt file3.txt file4.txt)
   n=0
   for f in "${files[@]}"; do
      grep 'string' "${f}" > /dev/null && n=$[n+1]


I would write `grep -q -m1 -F 'string' ...` here. In particular, -m1 
will short-circuit after finding one match.



   done

   if [[ $n == ${#files[@]} ]]; then
  do_something
   fi

Bash array syntax is a bit arcane, but at least these very useful data 
structures are available.


Here's a slightly different take. It avoids multiple grep invocations, 
which could be a good thing in the case of a lengthy file list.


files=(file1.txt file2.txt file3.txt file4.txt)
string="matchme" # Using -F below as it's a string, not a regex

count=0
while read -r matches; do
(( count += matches ))
done < <(grep -hcm1 -F "$string" "${files[@]}")

if (( count == ${#files[@]} )); then
do_something
fi
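To make the approach above concrete, here is a self-contained sketch that exercises the same counting loop against temporary files (the file names and contents are made up purely for the demonstration):

```shell
#!/bin/bash
# Create a scratch directory with two matching files and one non-matching one.
dir=$(mktemp -d)
trap 'rm -rf "$dir"' EXIT

echo "matchme here" > "$dir/file1.txt"
echo "no match"     > "$dir/file2.txt"
echo "matchme too"  > "$dir/file3.txt"

files=("$dir/file1.txt" "$dir/file2.txt" "$dir/file3.txt")
string="matchme"   # -F below treats this as a literal string, not a regex

# grep -h suppresses filenames, -c prints a per-file count, -m1 caps it at 1,
# so each output line is 0 or 1 and the sum is the number of matching files.
count=0
while read -r matches; do
    (( count += matches ))
done < <(grep -hcm1 -F "$string" "${files[@]}")

echo "$count"   # prints 2
```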

--Kerin




Re: [gentoo-user] bash script question

2014-08-18 Thread Kerin Millar

On 18/08/2014 15:02, Stroller wrote:


On Mon, 18 August 2014, at 1:16 pm, Kerin Millar kerfra...@fastmail.co.uk 
wrote:
...

(( count += matches ))
done < <(grep -hcm1 -F "$string" "${files[@]}")


Oh, this is lovely.

I've learned some things today.


if (( count == ${#files[@]} )); then


May I ask why you prefer these brackets for evaluation, please?


There was no particular reason, other than to maintain consistency in 
the example (both for evaluation and as an alternative to expansion). 
Sometimes, I find double square brackets to be a bit of an eyesore, but 
do tend to use them more often than not.


I particularly like double parentheses for checking exit codes assigned 
to variables. For example:


(( $? )) && echo "something went wrong"

As opposed to having to perform an explicit comparison:

[[ $? != 0 ]] && echo "something went wrong"
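A minimal, runnable illustration of the two styles (the failing command here is simply `false`; nothing is specific to any real script):

```shell
# A command fails, leaving a non-zero status in $?.
false
status=$?

# Arithmetic evaluation: any non-zero value counts as true inside (( )).
(( status )) && msg_arith="something went wrong"

# The explicit comparison says the same thing more verbosely.
[[ $status -ne 0 ]] && msg_cmp="something went wrong"

echo "$msg_arith / $msg_cmp"
```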

--Kerin



Re: [gentoo-user] bash script question

2014-08-18 Thread Kerin Millar

On 18/08/2014 15:18, Kerin Millar wrote:

On 18/08/2014 15:02, Stroller wrote:


On Mon, 18 August 2014, at 1:16 pm, Kerin Millar
kerfra...@fastmail.co.uk wrote:
...

(( count += matches ))
done < <(grep -hcm1 -F "$string" "${files[@]}")


Oh, this is lovely.

I've learned some things today.


if (( count == ${#files[@]} )); then


May I ask why you prefer these brackets for evaluation, please?


There was no particular reason, other than to maintain consistency in
the example (both for evaluation and as an alternative to expansion).
Sometimes, I find double square brackets to be a bit of an eyesore, but
do tend to use them more often than not.

I particularly like double parentheses for checking exit codes assigned
to variables. For example:

(( $? )) && echo "something went wrong"

As opposed to having to perform an explicit comparison:

[[ $? != 0 ]] && echo "something went wrong"


Oops, I meant to use the -ne operator there. Not that it actually makes 
a difference for this test.


--Kerin



Re: [gentoo-user] Changing glibc

2014-08-18 Thread Kerin Millar

On 18/08/2014 19:06, Timur Aydin wrote:

Hi,

I am using a closed source software package on my 64 bit gentoo linux
system. The software package is beyond compare by scooter soft.
Because of the way this package is built, it needs a specially patched
version of glibc. I have patched my existing glibc version (2.18) and
have been avoiding updating my glibc since. Now I am wondering whether
the latest update of bcompare will work with the latest glibc (2.19).

So, if I upgrade to 2.19 and the package doesn't work, how can I go back
to the working, patched 2.18? I know that portage issues the most scary
warnings when you try to downgrade glibc. So what does the community
recommend?



You should be able to downgrade glibc, provided that you haven't built 
and installed any new packages following the transition from glibc-2.18 
to glibc-2.19. That said, I would suggest that you back up the root 
filesystem as a contingency measure.


Still, why not test bcompare in a chroot? The latest stage3 tarball 
probably includes glibc-2.19 by now.


--Kerin



Re: [gentoo-user] Python and man problems

2014-08-04 Thread Kerin Millar

On 04/08/2014 15:46, Roger Cahn wrote:


Hello all,

I encounter some problems with python compiling and man errors.

1- Python :

I can't emerge some python packages (ie pyorbit, libmpeg2,
libbonobo-python, etc...).
Even the installed python packages can't emerge anymore.
Here is the error message for libgnome-python-2.28.1-r1
(USE=-examples PYTHON_TARGETS=-python2_7%)


Your PYTHON_TARGETS setting does not make sense. If you want python 
modules to target python-2.7 then define:


PYTHON_TARGETS="python2_7"

Otherwise, define python3_3 (or even both, space separated).



*ERROR: dev-python/libgnome-python-2.28.1-r1::gentoo failed (configure
phase):
  *   No supported Python implementation in PYTHON_TARGETS.
  *
  * Call stack:
  * ebuild.sh, line   93:  Called src_configure
  *   environment, line 4118:  Called gnome-python-common-r1_src_configure
  *   environment, line 2078:  Called python_parallel_foreach_impl
'gnome2_src_configure' '--disable-allbindings' '--enable-gnome'
'--enable-gnomeui'
  *   environment, line 3947:  Called _python_obtain_impls
  *   environment, line  702:  Called _python_validate_useflags
  *   environment, line  758:  Called die
  * The specific snippet of code:
  *   die No supported Python implementation in PYTHON_TARGETS.
*



As above.


Is it a locale problem ?

Here is my /etc/env.d/02locale

LANG=fr_FR.utf8
LC_ALL=

When i run locale, i get :

$ locale
LANG=fr_FR.UTF-8
LC_CTYPE=fr_FR.UTF-8
LC_NUMERIC=fr_FR.UTF-8
LC_TIME=fr_FR.UTF-8
LC_COLLATE=fr_FR.UTF-8
LC_MONETARY=fr_FR.UTF-8
LC_MESSAGES=fr_FR.UTF-8
LC_PAPER=fr_FR.UTF-8
LC_NAME=fr_FR.UTF-8
LC_ADDRESS=fr_FR.UTF-8
LC_TELEPHONE=fr_FR.UTF-8
LC_MEASUREMENT=fr_FR.UTF-8
LC_IDENTIFICATION=fr_FR.UTF-8
LC_ALL=

2- man does not work anymore ; i get the error message :

ie : $man emerge (sorry, it's in french)


You can make it English by prefixing the command with LC_MESSAGES=C.


*man emerge
sh: most : commande introuvable
Erreur pendant l'exécution du formatage ou de l'affichage.
Le système retourne pour (cd /usr/share/man && (echo .ll 11.1i; echo
.nr LL 11.1i; echo .pl 1100i; /bin/bzip2 -c -d
'/usr/share/man/man1/emerge.1.bz2'; echo .\\\; echo .pl \n(nlu+10)
| /usr/bin/gtbl | /usr/bin/nroff -mandoc | most) l'erreur 127.
Il n'y a pas de page de manuel pour emerge.
*
I recompiled the sys-apps/man package with success but the error is
still there.


most ... command not found. I've never heard of such a command. It looks 
as though PAGER=most is defined in your environment. In Gentoo, it is 
normally set as /usr/bin/less.


--Kerin



Re: [gentoo-user] Python and man problems

2014-08-04 Thread Kerin Millar

On 04/08/2014 17:50, Kerin Millar wrote:

On 04/08/2014 15:46, Roger Cahn wrote:


snip


*man emerge
sh: most : commande introuvable
Erreur pendant l'exécution du formatage ou de l'affichage.
Le système retourne pour (cd /usr/share/man && (echo .ll 11.1i; echo
.nr LL 11.1i; echo .pl 1100i; /bin/bzip2 -c -d
'/usr/share/man/man1/emerge.1.bz2'; echo .\\\; echo .pl \n(nlu+10)
| /usr/bin/gtbl | /usr/bin/nroff -mandoc | most) l'erreur 127.
Il n'y a pas de page de manuel pour emerge.
*
I recompiled the sys-apps/man package with success but the error is
still there.


most ... command not found. I've never heard of such a command. It looks
as though PAGER=most is defined in your environment. In Gentoo, it is
normally set as /usr/bin/less.



It transpires that sys-apps/most is in portage. Emerging this package 
should fix it, if you intend to continue using it as the PAGER.


--Kerin



Re: [gentoo-user] depclean wants to remove all perl?

2014-07-30 Thread Kerin Millar

On 30/07/2014 22:58, Alan McKinnon wrote:

On 30/07/2014 23:47, Mick wrote:

Having updated some perl packages, I ran perl-cleaner which failed with some
blockers, I ran:

emerge --deselect --ask $(qlist -IC 'perl-core/*')

emerge -uD1a $(qlist -IC 'virtual/perl-*')

as advised by perl-cleaner, before I ran perl-cleaner successfully.

Following all this, depclean gives me a long list of perl packages, but I am
reluctant to hit yes before I confirm that this is correct:



It's very likely safe:

http://dilfridge.blogspot.com/2014/07/perl-in-gentoo-dev-langperl-virtuals.html

I'm quite sure that whole list is now bundled with perl itself so
there's no need to have the modules as well.


Everything except for IO::Compress and Scalar::List::Utils. I concur; 
nothing about this list appears surprising.


--Kerin



Re: [gentoo-user] resolv.conf is different after every reboot

2014-07-30 Thread Kerin Millar

On 28/07/2014 16:34, Grand Duet wrote:

2014-07-28 1:00 GMT+03:00 Kerin Millar kerfra...@fastmail.co.uk:

On 27/07/2014 21:38, Grand Duet wrote:


2014-07-27 22:13 GMT+03:00 Neil Bothwick n...@digimed.co.uk:


On Sun, 27 Jul 2014 13:33:47 +0300, Grand Duet wrote:


That's what replaces it when eth0 comes up.
It looks like eth0 is not being brought up fully



It sounds logical. But how can I fix it?



By identifying how far it is getting and why no further.
But it appears that eth0 is being brought up correctly
and then the config is overwritten by the lo config.



I think so.

As I have already reported in another reply to this thread,
it is my first reboot after commenting out the line
   dns_domain_lo=mynetwork
and so far it went good.

Moreover, the file /etc/resolv.conf has not been overwritten.

I still have to check if everything else works fine and
if I will get the same result on the next reboot
but I hope that the problem has been solved.

But it looks like a bug in the net script.
Why should the lo configuration overwrite the eth0 configuration at all?



I would consider it to be a documentation bug at the very least. Being able to
propagate different settings to resolv.conf depending on whether a given
interface is up may be of value for some esoteric use-case, although I
cannot think of one off-hand. Some other distros use the resolvconf
application to handle these nuances.

In any case, it is inexplicable that the user is invited to define
dns_domain for the lo interface. Why would one want to push settings to
resolv.conf based on the mere fact that the loopback interface has come up?
Also, it would be a great deal less confusing if the option were named
dns_search.

I think that the handbook should refrain from mentioning the option at all,
for the reasons stated in my previous email. Those who know that they need
to define a specific search domain will know why and be capable of figuring
it out.

It's too bad that the handbook is still peddling the notion that this
somehow has something to do with 'setting' the domain name. It is tosh of
the highest order.


I agree with you. But how to put it all in the right ears?



I'm not entirely sure. I'd give it another shot if I thought it was 
worth the effort. My experience up until now is that requests for minor 
documentation changes are dismissed on the basis that, if it does not 
prevent the installation from being concluded, it's not worth bothering 
with [1]. I do not rate the handbook and, at this juncture, my concern 
is slight except for where it causes demonstrable confusion among the 
user community. Indeed, that's why my interest was piqued by this thread.


--Kerin

[1] For example: bugs 304727 and 344753



Re: [gentoo-user] depclean wants to remove all perl?

2014-07-30 Thread Kerin Millar

On 30/07/2014 23:12, Mick wrote:

On Wednesday 30 Jul 2014 23:02:38 Kerin Millar wrote:

On 30/07/2014 22:58, Alan McKinnon wrote:

On 30/07/2014 23:47, Mick wrote:

Having updated some perl packages, I ran perl-cleaner which failed with
some blockers, I ran:

emerge --deselect --ask $(qlist -IC 'perl-core/*')

emerge -uD1a $(qlist -IC 'virtual/perl-*')

as advised by perl-cleaner, before I ran perl-cleaner successfully.

Following all this, depclean gives me a long list of perl packages, but
I am



reluctant to hit yes before I confirm that this is correct:

It's very likely safe:

http://dilfridge.blogspot.com/2014/07/perl-in-gentoo-dev-langperl-virtual
s.html

I'm quite sure that whole list is now bundled with perl itself so
there's no need to have the modules as well.


Everything except for IO::Compress and Scalar::List::Utils. I concur;
nothing about this list appears surprising.

--Kerin


Thank you both!

I am on dev-lang/perl-5.18.2-r1


As Perl 5.16 is EOL, perhaps that's no bad thing. Incidentally, you can 
check whether a module is part of the Perl core by using the corelist tool.


$ corelist Archive::Tar
Archive::Tar was first released with perl v5.9.3

--Kerin



Re: [gentoo-user] resolv.conf is different after every reboot

2014-07-27 Thread Kerin Millar

On 27/07/2014 12:30, Grand Duet wrote:

2014-07-27 13:39 GMT+03:00 Walter Dnes waltd...@waltdnes.org:

On Sun, Jul 27, 2014 at 12:21:23PM +0300, Grand Duet wrote

This is a continuation of the thread:
Something went wrong with DNS, plz help!

Now, the issue became clearer, so I decided to start
a new thread with more descriptive Subject.

In short: the contents of the file /etc/resolv.conf
is unpredictably different from one reboot to another.
It is either
   # Generated by net-scripts for interface lo
   domain mynetwork
or
   # Generated by net-scripts for interface eth0
   nameserver My.First.DNS-Server.IP
   nameserver My.Second.DNS-Server.IP
   nameserver 8.8.8.8

I tried to chmod this file to be unwritable even for root
but after a reboot it has been overwritten anyway.


A similar problem was noted at...
https://forums.gentoo.org/viewtopic-t-816332-start-0.html


Like in the thread above, I also have a line
 dns_domain_lo=mynetwork
in my /etc/conf.d/net file. It says nothing to me
and I do not remember how it got there.

But somewhere on Gentoo forum I have found the following
explanation: "If you only specify dns_domain_lo=foo and
restart the lo interface it will put domain foo in /etc/resolv.conf
and remove everything else."


You can specify dns_domain - without an interface suffix - which ought 
to prevent this behaviour. However, you'd be better off getting rid of 
it altogether. All the option does is define the suffix(es) that are 
appended by the resolver under certain conditions. These conditions are 
as follows:


  a) the initial name isn't qualified (contains no dots) [1]
  b) the initial name could not be resolved (NXDOMAIN response)

Making up fake domains for this setting, as many Gentoo users are 
induced into doing, serves no purpose. Let's assume that I have 
fakedomain as a search domain in resolv.conf.


Let's see what happens for a short name:

  $ host -t A -v shorthost | grep -e Trying -e NX
  Trying shorthost.fakedomain
  Trying shorthost
  Host shorthost not found: 3(NXDOMAIN)

Result: two spurious DNS lookups, each resulting in NXDOMAIN. You may 
use tcpdump to confirm that there are indeed two.


Now, let's try looking up a fully qualified hostname that happens not to 
exist:


  $ host -t A -v nonexistent.google.com | grep -e Trying -e NX
  Trying nonexistent.google.com
  Trying nonexistent.google.com.fakedomain
  Host nonexistent.google.com not found: 3(NXDOMAIN)

Result: The first lookup fails and is immediately followed by another 
lookup that is completely and utterly useless. Had a search domain _not_ 
been defined, then the resolver could have concluded its efforts after 
the first NXDOMAIN response.


The bottom line is that it only makes sense to define search domain(s) 
if the following two conditions hold true.


  1) You want to be able to resolve hostnames in their short form
  2) Records for said names will exist in a known, *valid* domain

Otherwise, don't bother and leave it to the DHCP server to decide [2]. 
While I haven't looked at the handbook lately, it has had a history of 
prescribing dns/domain related options without adequate explanation and, 
in some cases, with outright misleading information [3].


On a related note, some people prefer to manage resolv.conf themselves 
and it is not initially obvious as to how to do this while also using 
DHCP. Trying to make the file immutable is not a proper approach. The 
trick is as follows:


  * Specify dhcp_eth0="nodns" (do this for any dhcp-using interfaces)
  * Do not specify any dns or nameserver related settings in conf.d/net

The netifrc scripts will then leave resolv.conf alone.
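A sketch of what such a configuration might look like (the interface name is an example; dhcp_eth0 is netifrc's generic per-interface DHCP module option):

```shell
# /etc/conf.d/net -- obtain a lease via DHCP but manage resolv.conf by hand
config_eth0="dhcp"
dhcp_eth0="nodns"   # accept the lease but ignore the DNS servers it offers

# ...and define no dns_* or nameserver-related settings anywhere in this file
```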

--Kerin

[1] Check out the ndots option in the resolv.conf(5) manpage
[2] DHCP servers may specify a search domain for clients with option 15
[3] https://bugs.gentoo.org/show_bug.cgi?id=341349



Re: [gentoo-user] NFS tutorial for the brain dead sysadmin?

2014-07-27 Thread Kerin Millar

On 27/07/2014 17:55, J. Roeleveld wrote:

On 27 July 2014 18:25:24 CEST, Stefan G. Weichinger li...@xunil.at wrote:

Am 26.07.2014 04:47, schrieb walt:


So, why did the broken machine work normally for more than a year
without rpcbind until two days ago?  (I suppose because nfs-utils was
updated to 1.3.0 ?)

The real problem here is that I have no idea how NFS works, and each
new version is more complicated because the devs are solving problems
that I don't understand or even know about.


I double your search for understanding ... my various efforts to set up
NFSv4 for sharing stuff in my LAN also lead to unstable behavior and
frustration.

Only last week I re-attacked this topic as I start using puppet here to
manage my systems ... and one part of this might be sharing
/usr/portage
via NFSv4. One client host mounts it without a problem, the thinkpads
don't do so ... just another example ;-)

Additional in my context: using systemd ... so there are other
(different?) dependencies at work and services started.

I'd be happy to get that working in a reliable way. I don't remember
unstable behavior with NFS (v2 back then?) when we used it at a company
I worked for in the 90s.

Stefan


I use NFS for filesharing between all wired systems at home.
Samba is only used for MS Windows and laptops.

Few things I always make sure are valid:
- One partition per NFS share
- No NFS share is mounted below another one
- I set the version to 3 on the clients
- I use LDAP for the user accounts to ensure the UIDs and GIDs are consistent.


These are generally good recommendations. I'd just like to make a few 
observations.


The problems associated with not observing the first constraint (one 
filesystem per export) can be alleviated by setting an explicit fsid. 
Doing so can also help to avoid stale handles on the client side if the 
backing filesystem changes - something that is very useful in a 
production environment. Therefore, I tend to start at 1 and increment 
with each newly added export. For example:-


  /export/foo  *(async,no_subtree_check,fsid=1)
  /export/foo/bar  *(async,no_subtree_check,fsid=2)
  /export/baz  *(async,no_subtree_check,fsid=3)

If using NFSv3, I'd recommend using nolock as a mount option unless 
there is a genuine requirement for locks to be co-ordinated. Such locks 
are only advisory and are of questionable value. Using nolock simplifies 
the requirements on both server and client side, and is beneficial for 
performance.
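For example, a client-side mount along these lines (the server name and paths are illustrative):

```shell
# NFSv3 over TCP with client-side advisory locking disabled
mount -t nfs -o vers=3,tcp,nolock fileserver:/export/foo /mnt/foo

# The equivalent /etc/fstab entry:
# fileserver:/export/foo  /mnt/foo  nfs  vers=3,tcp,nolock  0 0
```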


NFSv3/UDP seems to be limited to a maximum read/write block size of 
32768 in Linux, which will be negotiated by default. Using TCP, the 
upper bound will be the value of /proc/fs/nfsd/max_block_size on the 
server. Its value may be set to 1048576 at the most. NFSv3/TCP is 
problematic so I would recommend NFSv4 if TCP is desired as a transport 
protocol.


NFSv4 provides a useful uid/gid mapping feature that is easier to set up 
and maintain than nss_ldap.




NFS4 requires all the exports to be under a single foldertree.


This is a myth: 
http://linuxcostablanca.blogspot.co.uk/2012/02/nfsv4-myths-and-legends.html. 
Exports can be defined and consumed in the same manner as with NFSv3.




I haven't had any issues in the past 7+ years with this and in the past 5+ 
years I had portage, distfiles and packages shared.
/etc/portage is symlinked to a NFS share as well, allowing me to create binary 
packages on a single host (inside a chroot) which are then used to update the 
different machines.

If anyone wants a more detailed description of my setup. Let me know and I will 
try to write something up.

Kind regards

Joost



--Kerin



Re: [gentoo-user] resolv.conf is different after every reboot

2014-07-27 Thread Kerin Millar

On 27/07/2014 21:38, Grand Duet wrote:

2014-07-27 22:13 GMT+03:00 Neil Bothwick n...@digimed.co.uk:

On Sun, 27 Jul 2014 13:33:47 +0300, Grand Duet wrote:


That's what replaces it when eth0 comes up.
It looks like eth0 is not being brought up fully


It sounds logical. But how can I fix it?


By identifying how far it is getting and why no further.
But it appears that eth0 is being brought up correctly
and then the config is overwritten by the lo config.


I think so.

As I have already reported in another reply to this thread,
it is my first reboot after commenting out the line
  dns_domain_lo=mynetwork
and so far it went good.

Moreover, the file /etc/resolv.conf has not been overwritten.

I still have to check if everything else works fine and
if I will get the same result on the next reboot
but I hope that the problem has been solved.

But it looks like a bug in the net script.
Why should the lo configuration overwrite the eth0 configuration at all?


I would consider it to be a documentation bug at the very least. Being able 
to propagate different settings to resolv.conf depending on whether a 
given interface is up may be of value for some esoteric use-case, 
although I cannot think of one off-hand. Some other distros use the 
resolvconf application to handle these nuances.


In any case, it is inexplicable that the user is invited to define 
dns_domain for the lo interface. Why would one want to push settings to 
resolv.conf based on the mere fact that the loopback interface has come 
up? Also, it would be a great deal less confusing if the option were 
named dns_search.


I think that the handbook should refrain from mentioning the option at 
all, for the reasons stated in my previous email. Those who know that 
they need to define a specific search domain will know why and be 
capable of figuring it out.


It's too bad that the handbook is still peddling the notion that this 
somehow has something to do with 'setting' the domain name. It is tosh 
of the highest order.


--Kerin



Re: [gentoo-user] [Gentoo-User] emerge --sync likely to kill SSD?

2014-06-19 Thread Kerin Millar

On 19/06/2014 12:56, Rich Freeman wrote:

On Thu, Jun 19, 2014 at 7:44 AM, Neil Bothwick n...@digimed.co.uk wrote:

On Thu, 19 Jun 2014 16:40:08 +0800, Amankwah wrote:


Maybe the only solution is that move the portage tree to HDD??


Or tmpfs if you rarely reboot or have a fast enough connection to your
preferred portage mirror.


There has been a proposal to move it to squashfs, which might
potentially also help.

The portage tree is 700M uncompressed, which seems like a bit much to
just leave in RAM all the time.


The tree will not necessarily be left in RAM all of the time. Pages 
allocated by tmpfs reside in pagecache. Given sufficient pressure, they 
may be migrated to swap. Even then, zswap [1] could be used so as to 
reduce write amplification. I like Neil's suggestion, assuming that the 
need to reboot is infrequent.


--Kerin

[1] https://www.kernel.org/doc/Documentation/vm/zswap.txt



Re: [gentoo-user] RAID 1 vs RAID 0 - Read perfonmance

2014-02-23 Thread Kerin Millar

On 24/02/2014 06:27, Facundo Curti wrote:

Hi. I am again, with a similar question to previous.

I want to install RAID on SSD's.

Comparing THEORETICALLY, RAID0 (stripe) vs RAID1 (mirrior). The
performance would be something like this:

n= number of disks

reads:
   raid1: n*2
   raid0: n*2

writes:
   raid1: n
   raid0: n*2

But, in real life, the reads from raid 0 doesn't work at all, because if
you use chunk size from 4k, and you need to read just 2kb (most binary
files, txt files, etc..). the read speed should be just of n.


While the workload does matter, that's not really how it works. Be aware 
that Linux implements read-ahead (defaulting to 128K):-


# blockdev --getra /dev/sda
256

That's enough to populate 32 pages in pagecache, given that PAGESIZE is 
4K on i386/amd64.
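The arithmetic behind those figures: blockdev reports read-ahead in 512-byte sectors, so the default of 256 sectors amounts to 128KiB, or 32 pages of 4KiB each:

```shell
ra_sectors=256                         # as reported by blockdev --getra
ra_bytes=$(( ra_sectors * 512 ))       # 256 sectors * 512 bytes = 131072 (128KiB)
pages=$(( ra_bytes / 4096 ))           # 131072 / 4096 = 32 pages

echo "${ra_bytes} bytes -> ${pages} pages"   # prints: 131072 bytes -> 32 pages
```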




On the other side, I read over the net that the kernel doesn't support
multithreaded reads on raid1. So, the read speed will be just n. Always.
Is that true?


No, it is not true. Read balancing is implemented in RAID-1.



Anyway, my question is: who has the best read speed for day-to-day use?
I'm not asking about reads of large files, just normal use: opening
firefox, X, regular files, etc.


For casual usage, it shouldn't make any difference.



I can't find the definitive guide. They are always talking about
theoretical performance, or about real life but without benchmarks
or reliable data.

Having a RAID0 with SSD, and following [2] on SSD Stripe Optimization
should I have the same speed as an RAID1?


I would highly recommend conducting your own benchmarks. I find sysbench 
to be particularly useful.





My question is because I'm deciding between 4-disk raid1 or RAID10 (I want
redundancy anyway..). And as raid 10 = 1 + 0, I need to know raid0
performance to make a choice... I don't need write speed, just read.


In Linux, RAID-10 is not really nested because the mirroring and 
striping is fully integrated. If you want the best read performance with 
RAID-10 then the far layout is supposed to be the best [1].


Here is an example of how to choose this layout:

# mdadm -C /dev/md0 -n 4 -l 10 -p f2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

Note, however, that the far layout will exhibit worse performance than 
the near layout if the array is in a degraded state. Also, it 
increases seek time in random/mixed workloads but this should not matter 
if you are using SSDs.


--Kerin

[1] http://neil.brown.name/blog/20040827225440



Re: [gentoo-user] RAID 1 install guide?

2014-02-22 Thread Kerin Millar

On 05/09/2013 07:13, J. Roeleveld wrote:

On Thu, September 5, 2013 05:04, James wrote:

Hello,

What would folks recommend as a Gentoo
installation guide for a 2 disk Raid 1
installation? My previous attempts all failed
to trying to follow (integrate info from)
a myriad-malaise of old docs.


I would start with the Raid+LVM Quick install guide:
http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml


It seems much of the documentation for such is
deprecated, with large disk, newer file systems
(ZFS vs ext4 vs ?) UUID, GPT mdadm,  etc etc.


Depending on the size of the disk, fdisk or gdisk needs to be used.
Filesystems, in my opinion, matter only for the data intended to be put on.

For raid-management, I use mdadm (using the Linux kernel software raid).
If you have a REAL hardware raid card, I would recommend using that.
(Cheap and/or onboard raid is generally slower than the software raid
implementation in the kernel, and the added bonus of being able to recover
the raid using any other Linux installation helps.)


File system that is best for a Raid 1 workstation?


I use Raid0 (striping) on my workstations with LVM and, mostly, ext4
filesystems. The performance is sufficient for my needs.
All my important data is stored on a NAS with hardware Raid-6, so I don't
care if I lose the data on the workstations.


File system that is best for a Raid 1
(casual usage) web server ?


Whichever filesystem would be best if you don't use Raid.
Raid1 means all data is duplicated, from a performance P.O.V., it is not a
good option. Not sure if distributed reads are implemented yet in the
kernel.


They are. It's perfectly good for performance, provided the array is of 
insufficient magnitude to encounter a bottleneck pertaining to 
controller/bus bandwidth.


--Kerin



Re: [gentoo-user] RAID 1 on /boot

2014-02-22 Thread Kerin Millar

On 22/02/2014 11:41, J. Roeleveld wrote:

On Sat, February 22, 2014 06:27, Facundo Curti wrote:

Hi all. I'm new in the list, this is my third message :)
First of all, I need to say sorry if my english is not perfect. I speak
spanish. I post here because gentoo-user-es is half dead, and it's a
great chance to practice my english :) Now, the problem.


First of all, there are plenty of people here who don't have English as a
native language. Usually we manage. :)


I'm going to get a new PC with a 120GB SSD and another 1TB HDD.
But in a coming future, I want to add 2 or more disks SSD.

Mi idea now, is:

 Disk HDD: /dev/sda
/dev/sda1 26GB
/dev/sda2 90GB
/dev/sda3 904GB

 Disk SSD: /dev/sdb
/dev/sdb1 26GB
/dev/sdb2 90GB
/dev/sdb3 4GB

And use /dev/sdb3 as swap. (I will add more with another SSD in future)
/dev/sda3 mounted in /home/user/data (to save data unused)


Why put the swap on the SSD?


And a RAID 1 with:
md0: sda1+sdb1/
md1: sda2+sdb2/home

(sda1 and sda2 will be made with the flag: write-mostly. This is useful
for slower disks.)
In a future, I'm going to add more SSD's on this RAID. My idea is the
fastest I/O.

Now. My problem/question is:
Following the gentoo's
doc http://www.gentoo.org/doc/es/gentoo-x86+raid+lvm2-quickinstall.xml,
it says I need to put the flag --metadata=0.9 on the RAID. My question is:
will this degrade the performance?


It has no impact on performance.



metadata=0.9 might be necessary for the BIOS of your computer to see the
/boot partition. If you use an initramfs, you can use any metadata you
like for the root-partition.


The BIOS should not care at all as it is charged only with loading code 
from the MBR. However, if the intention is to use grub-0.97 then the 
array hosting the filesystem containing /boot should:


  * use RAID-1
  * use the 0.90 superblock format

That way, grub-0.97 can read the filesystem from either block device 
belonging to the array. Doing it any other way requires a bootloader 
that specifically understands md (such as grub2).
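
As a sketch, creating such a /boot array with mdadm might look like the
following (the device names /dev/sda1, /dev/sdb1 and /dev/md0 are
assumptions for illustration, not taken from the thread):

```shell
# Create a RAID-1 array with the 0.90 superblock format so that
# grub-0.97 can read the filesystem from either member on its own.
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 \
    /dev/sda1 /dev/sdb1

# Put a filesystem on the array and use it as /boot.
mke2fs /dev/md0
```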


There's also the neat trick of installing grub to all disks belonging to 
the array for bootloader redundancy. However, I'm not entirely sure that 
Code Listing 2.35 in the Gentoo doc is correct. Given that particular 
example, I would instead do it like this:-


  grub device (hd0) /dev/sda
  grub root (hd0,0)
  grub setup (hd0)
  grub device (hd0) /dev/sdb
  grub root (hd0,0)
  grub setup (hd0)

The idea there is that, should it ever be necessary to boot from the 
second disk, the disk in question would be the one enumerated first by 
the BIOS (mapping to hd0 in grub). Therefore, grub should be installed 
in that context across all disks. It should not be allowed to boot from 
any given drive and subsequently try to access (hd1).


With grub2, it's a little easier because it is only necessary to run 
grub-install on each of the drives:


  # grub-install /dev/sda
  # grub-install /dev/sdb




I only found this document:
https://raid.wiki.kernel.org/index.php/RAID_superblock_formats#The_version-0.90_Superblock_Format.
It describes the differences, but says nothing about performance or
advantages/disadvantages.


The 0.90 superblock format is subject to specific limitations that are 
clearly described by that page. For example, it is limited to 28 devices 
in an array, with each device being limited to 2TB in size.


Also, the 0.90 format will cause issues in certain setups because of the 
way that it places its metadata at the end of the block device [1]. That 
said, the 0.90 format does allow for the kernel to construct the array 
without any intervention from userspace so it still has its uses.


The 1.2 format positions the superblock 4KiB from the beginning of the 
device. Note that this has nothing at all to do with the data, which 
usually begins 1MiB in. If you run mdadm -E on a member of such an 
array, the offset will be reported as the Data Offset. For example:


  Data Offset : 2048 sectors

So, it's not a matter of alignment. Rather, the advantage of the 1.2 
format is that it leaves a little space for bootloader code e.g. in case 
you want to create an array from whole disks rather than disk partitions.
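
For instance, a whole-disk array could be created and its data offset
inspected as follows (a sketch; the device names are assumptions):

```shell
# Create a RAID-1 array from whole disks using the 1.2 superblock format.
mdadm --create /dev/md0 --metadata=1.2 --level=1 --raid-devices=2 \
    /dev/sda /dev/sdb

# Examine a member; the "Data Offset" line reports where the data
# begins, leaving space before it for bootloader code.
mdadm -E /dev/sda | grep 'Data Offset'
```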


None of this matters to me so I tend to stick to the 1.1 format. It 
wouldn't actually make any difference to my particular use case.





Another question: does GRUB2 still not support metadata 1.2?


See reply from Canek.


In case the metadata format degrades performance, and GRUB2 doesn't support it:
does anyone know how I can arrange things so as to use metadata 1.2?

I didn't partition further because I saw it as unnecessary. I just need a
separate /home in case I need to reinstall the system. But if I need a
separate /boot to make it work, I have no problem doing that.

But of course, /boot also as RAID...


Keep /boot separate as RAID-1 with metadata=0.9 and you are safe.


Does anybody have ideas on how to make it work?


It is similar to what I do, except I don't have SSDs in my desktop.

I have 

Re: [gentoo-user] Re: How to run 2.6.25 kernel (no DEVTMPFS)?

2014-02-14 Thread Kerin Millar

On 14/02/2014 21:31, Grant Edwards wrote:

On 2014-02-14, Mike Gilbert flop...@gentoo.org wrote:

On Fri, Feb 14, 2014 at 4:12 PM, Grant Edwards
grant.b.edwa...@gmail.com wrote:

I need to do some testing with kernels as far back as 2.6.25.  I've
currently got a Gentoo box that can build and run kernels ranging from
3.14.rc2 to 2.6.32. There are various gcc and make issues which have
been successfully dealt with, but now I'm stuck on DEVTMPFS.

Prior to 2.6.32 DEVTMPFS isn't available, so even though I can build
and boot a 2.6.25 kernel, udev craps out.

There are plenty of spare partitions to play with, so doing a Linux
install to test with kernels older than 2.6.32 is no problem.

I'm wondering if instead of downloading an old Ubuntu or Fedora DVD,
is there any way to install an old version of Gentoo that will work
with pre-DEVTMPFS kernels?


Do you actually need udev?


Good question -- I probably don't.  For the testing in question I
should be able to live with a static /dev directory.  Is there any
documentation on doing a Gentoo install without udev?


If you can get away with just having a static /dev with pre-created
device nodes, that would be the simplest solution.


It would probably be asking for too much to try to toggle between udev
and static /dev at boot time in a single installation...



I remember that it was possible to toggle before openrc was introduced.

As things stand now, you would probably have to replace sys-fs/udev with 
sys-fs/static-dev (which satisfies virtual/dev-manager). NeddySeagoon 
mentions it here:


http://dev.gentoo.org/~neddyseagoon/Old_Fashioned_Gentoo_2.xml

He also describes its coverage as being incomplete. In that case, this 
may help to populate /dev to a reasonable extent:


https://bugs.gentoo.org/show_bug.cgi?id=368597#c97

--Kerin



Re: [gentoo-user] User eix-sync permissions problem

2014-02-10 Thread Kerin Millar

On 10/02/2014 19:03, Walter Dnes wrote:

On Mon, Feb 10, 2014 at 05:09:55PM +, Stroller wrote


On Mon, 10 February 2014, at 4:55 pm, Gleb Klochkov glebiu...@gmail.com wrote:


Hi. Try to use sudo with no password for eix-sync.


I'd really rather not. Thanks, though.


   Being in group portage is not enough.  That merely lets you do
emerges with --pretend.  emerge --sync modifies files in
/usr/portage.  Files and directories in /usr/portage/ are user:group
root:root.  Therefore you *NEED* root-level permission to modify them.
No ifs/ands/ors/buts.  The overall easiest method is to (as root)...


You are mistaken. The usersync FEATURE is a default. You can rename 
your make.conf file and invoke portageq envvar FEATURES to verify this. 
The consequence of this feature being enabled is that portage assumes 
the privileges of the owner of /usr/portage. The entire point of this is 
that portage doesn't have to exec rsync as root. Doing so is both 
needless and dangerous.


Ergo, recursively setting the permissions of /usr/portage to 
portage:portage is actually a really good idea. Indeed, you should find 
that recent portage snapshot tarballs extract in such a way that portage 
is both the owner and associated group.


The problem the OP is having concerns only the file modes, which is a 
separate matter altogether.


--Kerin



Re: [gentoo-user] User eix-sync permissions problem

2014-02-10 Thread Kerin Millar

On 10/02/2014 16:05, Stroller wrote:

Hello all,

I'm a little bit rusty, but my recollection is that I should be able to perform 
`eix-sync` (or `emerge --sync`?) as a user to synchronise my local copy of the 
portage tree with Gentoo's master portage tree.

User is in the portage group:

$ whoami
stroller
$ groups stroller
wheel audio video portage cron users
$

Yet I get these permission denied errors:

$ eix-sync
  * Running emerge --sync

Synchronization of repository 'gentoo' located in '/usr/portage'...
Starting rsync with rsync://91.186.30.235/gentoo-portage...
Checking server timestamp …

…
receiving incremental file list
rsync: delete_file: unlink(app-accessibility/caribou/caribou-0.4.12.ebuild) 
failed: Permission denied (13)
rsync: delete_file: 
unlink(app-accessibility/emacspeak/files/emacspeak-33.0-respect-ldflags.patch) 
failed: Permission denied (13)
rsync: delete_file: 
unlink(app-accessibility/emacspeak/files/emacspeak-33.0-greader-garbage.patch) 
failed: Permission denied (13)

(full output attached)


Googling the problem I see a bunch of Gentoo Forums posts talking about 
changing at random the permissions of /var/tmp/ or /var/tmp/portage/, but no 
rationale is given, and I don't think this is the cause:

$ emerge --info | grep -i tmpdir
PORTAGE_TMPDIR=/var/tmp
$ ls -ld /var/tmp/
drwxrwxrwt 3 root root 4096 Feb  5 13:47 /var/tmp/
$ ls -ld /var/tmp/portage/
drwxrwxr-x 5 portage portage 4096 Feb  5 12:32 /var/tmp/portage/
$


More likely seems to be the permissions of /usr/portage/:

$ ls -ld /usr/portage/
drwxr-xr-x 167 portage portage 4096 Jan  5 02:31 /usr/portage/
$ ls -ld /usr/portage/app-accessibility/caribou/caribou*.ebuild
-rw-r--r-- 1 portage portage 2432 Aug 25 23:11 
/usr/portage/app-accessibility/caribou/caribou-0.4.12.ebuild
-rw-r--r-- 1 portage portage 2431 Dec  8 18:01 
/usr/portage/app-accessibility/caribou/caribou-0.4.13.ebuild
$

This would seem to allow portage itself to synchronise the Portage tree, but 
not members of the portage group.


I am able to run `emerge --sync` as root, but it doesn't solve the 
problem - next time I run `eix-sync` as user, I get permission denied again.

Shouldn't a sync reset the permissions of the portage tree to be correct?


`emerge --info | grep -i feature` shows that FEATURES="userfetch userpriv 
usersandbox usersync" (and some others - see attached) are set.

I can reproduce this on a second AMD64 machine, both are running portage-2.2.7.

Thanks in advance for any help, advice or suggestions you can offer,


This should work:-

PORTAGE_RSYNC_EXTRA_OPTS="--chmod=g+w"

--Kerin



Re: [gentoo-user] User eix-sync permissions problem

2014-02-10 Thread Kerin Millar

On 10/02/2014 19:29, Alan McKinnon wrote:

On 10/02/2014 21:03, Walter Dnes wrote:

On Mon, Feb 10, 2014 at 05:09:55PM +, Stroller wrote


On Mon, 10 February 2014, at 4:55 pm, Gleb Klochkov glebiu...@gmail.com  
wrote:


Hi. Try to use sudo with no password for eix-sync.


I'd really rather not. Thanks, though.


   Being in group portage is not enough.  That merely lets you do
emerges with --pretend.  emerge --sync modifies files in
/usr/portage.  Files and directories in /usr/portage/  are user:group
root:root.  Therefore you*NEED*  root-level permission to modify them.

Not quite; it's not as cut and dried as that. If root chowns the files to
a regular user, and that user then syncs, ownership remains with the
user (as a regular user can't chown stuff and the owner must remain the
user regardless of what the master tree reckons the owning uid is).

If the tree is then synced by root, well then all the problems come back


It won't cause any problems. The effect of usersync is defined thus:

Drop privileges to the owner of PORTDIR for emerge(1).

Hence, emerge --sync run as root will execute rsync as the portage user, 
assuming that PORTDIR is owned by that very same user.


It can only be problematic if all of these conditions hold true:

* usersync is enabled (as it is by default)
* PORTDIR is owned by a non-root user
* The ownership is not consistent across PORTDIR and its children

As mentioned in a few other posts, recent snapshots are portage:portage 
throughout so it's a done deal for new installations. Those who still 
have it owned by root can benefit from usersync simply by running:


# chown -R portage:portage $(portageq envvar PORTDIR)

There is no subsequent requirement not to invoke emerge --sync as root.

--Kerin



Re: [gentoo-user] User eix-sync permissions problem

2014-02-10 Thread Kerin Millar

On 10/02/2014 23:57, Walter Dnes wrote:

On Mon, Feb 10, 2014 at 11:10:50PM +, Kerin Millar wrote


As mentioned in a few other posts, recent snapshots are portage:portage
throughout so it's a done deal for new installations.


   How recent?  Looking back into ~/Maildir/spam/cur/ I see that the
email file suffix changed from .d531:2,S to .i660:2,S on May 14th,
2013 (i.e. the current machine i660 was installed and pulling mail as
of that date).


I do not know but I would assume that the snapshots have been 
constructed in this fashion since (at least) the point where usersync 
became a default feature, which was in portage-2.1.13.





  Those who still have it owned by root can benefit from usersync
  simply by running:

# chown -R portage:portage $(portageq envvar PORTDIR)

There is no subsequent requirement not to invoke emerge --sync as root.


   What's the point, if you still have to run as root (or su or sudo) for
the emerge update process?



It's the principle of least privilege. Is there any specific reason for 
portage to fork and exec rsync as root? Is rsync sandboxed? Should rsync 
have unfettered read/write access to all mounted filesystems? Can it be 
guaranteed that rsync hasn't been compromised? Can it be guaranteed that 
PORTAGE_RSYNC_OPTS will contain safe options at all times?


The answer to all of these questions is no. Basically, the combination 
of usersync and non-root ownership of PORTDIR hardens the process in a 
sensible way while conferring no disadvantage.
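
As a quick sanity check, the active features and the ownership of
PORTDIR can be inspected like so (a sketch; the exact output naturally
varies per installation):

```shell
# Confirm that usersync is among the active portage features.
portageq envvar FEATURES | tr ' ' '\n' | grep -x usersync

# Check who owns the top of the portage tree; with usersync enabled,
# rsync will run as this user rather than as root.
ls -ld "$(portageq envvar PORTDIR)"
```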


--Kerin



Re: [gentoo-user] User eix-sync permissions problem

2014-02-10 Thread Kerin Millar

On 10/02/2014 20:30, Kerin Millar wrote:

On 10/02/2014 16:05, Stroller wrote:

Hello all,

I'm a little bit rusty, but my recollection is that I should be able
to perform `eix-sync` (or `emerge --sync`?) as a user to synchronise
my local copy of the portage tree with Gentoo's master portage tree.

User is in the portage group:

$ whoami
stroller
$ groups stroller
wheel audio video portage cron users
$

Yet I get these permission denied errors:

$ eix-sync
  * Running emerge --sync

Synchronization of repository 'gentoo' located in '/usr/portage'...
Starting rsync with rsync://91.186.30.235/gentoo-portage...
Checking server timestamp …

…
receiving incremental file list
rsync: delete_file:
unlink(app-accessibility/caribou/caribou-0.4.12.ebuild) failed:
Permission denied (13)
rsync: delete_file:
unlink(app-accessibility/emacspeak/files/emacspeak-33.0-respect-ldflags.patch)
failed: Permission denied (13)
rsync: delete_file:
unlink(app-accessibility/emacspeak/files/emacspeak-33.0-greader-garbage.patch)
failed: Permission denied (13)

(full output attached)


Googling the problem I see a bunch of Gentoo Forums posts talking
about changing at random the permissions of /var/tmp/ or
/var/tmp/portage/, but no rationale is given, and I don't think this
is the cause:

$ emerge --info | grep -i tmpdir
PORTAGE_TMPDIR=/var/tmp
$ ls -ld /var/tmp/
drwxrwxrwt 3 root root 4096 Feb  5 13:47 /var/tmp/
$ ls -ld /var/tmp/portage/
drwxrwxr-x 5 portage portage 4096 Feb  5 12:32 /var/tmp/portage/
$


More likely seems to be the permissions of /usr/portage/:

$ ls -ld /usr/portage/
drwxr-xr-x 167 portage portage 4096 Jan  5 02:31 /usr/portage/
$ ls -ld /usr/portage/app-accessibility/caribou/caribou*.ebuild
-rw-r--r-- 1 portage portage 2432 Aug 25 23:11
/usr/portage/app-accessibility/caribou/caribou-0.4.12.ebuild
-rw-r--r-- 1 portage portage 2431 Dec  8 18:01
/usr/portage/app-accessibility/caribou/caribou-0.4.13.ebuild
$

This would seem to allow portage itself to synchronise the Portage
tree, but not members of the portage group.


I am able to run `emerge --sync` as root, but it doesn't solve the
problem - next time I run `eix-sync` as user, I get
permission denied again.

Shouldn't a sync reset the permissions of the portage tree to be correct?


`emerge --info | grep -i feature` shows that FEATURES="userfetch
userpriv usersandbox usersync" (and some others - see attached) are set.

I can reproduce this on a second AMD64 machine, both are running
portage-2.2.7.

Thanks in advance for any help, advice or suggestions you can offer,


This should work:-

PORTAGE_RSYNC_EXTRA_OPTS="--chmod=g+w"



Please excuse the reply-to-self but this issue piqued my interest and I 
think that I now have a better answer.


1) chown -R portage:portage $(portageq envvar PORTDIR)
2) find $(portageq envvar PORTDIR) -type f -exec chmod 0664 {} +
3) find $(portageq envvar PORTDIR) -type d -exec chmod 2775 {} +
4) Add to make.conf: PORTAGE_RSYNC_EXTRA_OPTS="--no-p --chmod=g+w"
5) Sync as yourself thereafter (as root should work equally well!)

The reason for using --no-p is to prevent rsync from spewing errors 
about not being able to set the file modes when you sync as a regular 
user. These errors don't necessarily indicate that a file cannot be 
written - merely that the mode couldn't be set.


Such errors would occur because, though you are in the portage group, 
you are not necessarily the owner of the files that rsync is in the 
course of modifying. However, as long as the g+w bit is set for all 
newly created files/directories, I would posit that it doesn't actually 
matter. Instead, you can simply avoid synchronizing the permissions with 
the remote.


Finally, having the setgid bit set on directories ensures that files 
written out by your user beneath PORTDIR will always inherit the portage 
group rather than whatever your primary group happens to be.


I am still in the course of testing this out but I am fairly certain 
that it will work.


--Kerin



Re: [gentoo-user] User eix-sync permissions problem

2014-02-10 Thread Kerin Millar

On 11/02/2014 01:23, Walter Dnes wrote:

On Tue, Feb 11, 2014 at 12:28:43AM +, Kerin Millar wrote

On 10/02/2014 23:57, Walter Dnes wrote:


What's the point, if you still have to run as root (or su or sudo) for
the emerge update process?


It's the principle of least privilege. Is there any specific reason for
portage to fork and exec rsync as root? Is rsync sandboxed? Should rsync
have unfettered read/write access to all mounted filesystems? Can it be
guaranteed that rsync hasn't been compromised? Can it be guaranteed that
PORTAGE_RSYNC_OPTS will contain safe options at all times?

The answer to all of these questions is no. Basically, the combination
of usersync and non-root ownership of PORTDIR hardens the process in a
sensible way while conferring no disadvantage.


   If /usr/portage is owned by portage:portage, then wouldn't a user
(member of portage) be able to do mischief by tweaking ebuilds?  E.g.
modify an ebuild to point to a tarball located on a usb stick, at
http://127.0.0.1/media/sdc1/my_tarball.tgz.  This would allow a local
user to supply code that gets built and then installed in /usr/bin, or
/sbin, etc.


Yes, but only if the group write bit is set throughout PORTDIR. By 
default, rsync - as invoked by portage - preserves the permission bits 
from the remote and the files stored by the mirrors do not have this bit 
set.


What I have described elsewhere is a method for ensuring that the group 
write bit is set. In that case, your concern is justified; you would 
definitely not want to grant membership of the portage group to anyone 
that you couldn't trust in this context.


--Kerin



Re: [gentoo-user] Re: Portage performance dropped considerably

2014-01-29 Thread Kerin Millar

hasufell wrote:

snip


If we support disabling all useflags on package level (and we do),
then we support disabling all on global level as well. All
_unexpected_  breakage that occurs due to that are ebuild bugs that
have incorrect dependencies or missing REQUIRED_USE constraints.

Defaults are just a usability thing, nothing more.


Amen. The notion that a particular USE flag combination is 'wrong' for a 
package is nonsensical. The notion that a user is culpable for QA issues 
that are solely the preserve of the maintainer is even more so. Any 
such issues should either be fixed, or the options afforded to the user 
withdrawn if the maintainer is unwilling or unable to support them. 
Scolding those users who have the audacity to configure Gentoo to serve 
their requirements is a distraction, to put it politely.


--Kerin



Re: [gentoo-user] Gentoo on Dell PowerEdge R715/R720

2014-01-25 Thread Kerin Millar

Johann Schmitz wrote:

We use Dell servers exclusively and have for 15 years. I think we're up
to 400+ physical boxes now and the number of Linux-compatibility issues
in all that time is exactly zero :-)


That's good to hear.


If Dell sold server-class hardware that wasn't 100% supported in Linux,
their sales would suffer badly, they have a strong business model around
100% Linux support in the data center. So the odds are very much on your
side, but do your Google checks on the proposed hardware and verify.


You're right. But it's quite a difference to sell something which comes
with some Linux distribution installed and works and to have a system
in place which lacks a single but important feature just because it's
not one of the mainstream features.


1. Pick a server model that has an option to ship with RHEL
pre-installed, this indicates 100% Linux compatibility.


That was one of the reasons why we're thinking about Dell server. They
offer SLES and RHEL as operating systems.

But again, often the last 1% of the hardware's features (like hardware
sensors) are only available if you use some creepy vendor software (like
OpenManage).


That is not the case. I have some R720s and they expose no fewer than 
151 distinct sensor readings through IPMI. Simply install freeipmi and 
run ipmi-sensors. Other useful tools are ipmi-oem, whose Dell-specific 
extensions can be used to configure DRAC cards, and ipmi-sel, which can 
be used to read the System Event Log. I use a simple script to scrape 
the SEL and dispatch new entries in the form of an email alert.
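
A minimal sketch of such a scrape-and-mail script might look like this
(not the actual script mentioned above; the state-file path, recipient
and ipmi-sel output handling are assumptions to be adapted):

```shell
#!/bin/sh
# Sketch: mail any IPMI SEL entries recorded since the last run.
# Assumes freeipmi's ipmi-sel; paths and recipient are illustrative.
STATE=/var/lib/ipmi-sel.last
LAST=$(cat "$STATE" 2>/dev/null || echo 0)

# ipmi-sel prints one numbered record per line; keep only records
# with an ID greater than the last one we dispatched.
ipmi-sel --comma-separated-output \
    | awk -F, -v last="$LAST" '$1+0 > last' > /tmp/sel.new

if [ -s /tmp/sel.new ]; then
    mail -s "New IPMI SEL entries on $(hostname)" root < /tmp/sel.new
    # Remember the highest record ID seen so far.
    awk -F, 'END { print $1 }' /tmp/sel.new > "$STATE"
fi
```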


--Kerin



Re: [gentoo-user] NAT problem

2014-01-10 Thread Kerin Millar

the wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hello. This is the the first time I'm dealing with wifi and the second
time with NAT.
I have a server (access point) with a ppp0 interface (internet), eth0,
wlan0, tun0 and sit0. A dhcp server is listening on wlan0 and provides
local ip addresses, dns (= my isp dns) and router (= server wlan0 ip
address). Nat is configured on the server like this:
# Generated by iptables-save v1.4.20 on Fri Jan 10 21:34:26 2014
*raw
:PREROUTING ACCEPT [1000941:974106726]
:OUTPUT ACCEPT [775261:165606146]
COMMIT
# Completed on Fri Jan 10 21:34:26 2014
# Generated by iptables-save v1.4.20 on Fri Jan 10 21:34:26 2014
*nat
:PREROUTING ACCEPT [888:45008]
:INPUT ACCEPT [63:9590]
:OUTPUT ACCEPT [442:27137]
:POSTROUTING ACCEPT [36:1728]
-A POSTROUTING -o ppp0 -j MASQUERADE
COMMIT
# Completed on Fri Jan 10 21:34:26 2014
# Generated by iptables-save v1.4.20 on Fri Jan 10 21:34:26 2014
*mangle
:PREROUTING ACCEPT [1000941:974106726]
:INPUT ACCEPT [951658:947497602]
:FORWARD ACCEPT [39262:26279024]
:OUTPUT ACCEPT [775261:165606146]
:POSTROUTING ACCEPT [814621:191890787]
COMMIT
# Completed on Fri Jan 10 21:34:26 2014
# Generated by iptables-save v1.4.20 on Fri Jan 10 21:34:26 2014
*filter
:INPUT ACCEPT [371:35432]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [33994:3725352]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i wlan0 -o ppp0 -j ACCEPT
-A FORWARD -i ppp0 -o wlan0 -j ACCEPT
-A FORWARD -i eth0 -j DROP
-A FORWARD -i tun0 -j DROP
COMMIT
# Completed on Fri Jan 10 21:34:26 2014
I have a client that connects to my wifi, obtains an address via dhcp
and ... can't access most internet sites.
I was able to ping any web service I could think of, but I was able to
use only google/youtube. I can do text/image searches on google and
can open youtube(but videos aren't loading). On other services wget
says connection established, but it can't retrieve anything. If I ssh
to an external server (not my NAT server) I can run ls, but if I try
ls -alh I receive only half of the file list and the terminal hangs
after that.
If I do $python -m http.server on my server I can do file transfers
and open html pages on my client. I have tried this
https://wiki.archlinux.org/index.php/Software_Access_Point#WLAN_is_very_slow

Also, I have tried inserting a LOG target in the FORWARD chain of the
filter table. It showed that I send way more packets (~10) to an http
server than I receive (~2-4).
The client is fine and behaves normally with wifi, used it many times.
Thanks for your time.


It's probable that you need to make use of MSS clamping. Try the 
following rule:


iptables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN -j 
TCPMSS --clamp-mss-to-pmtu


--Kerin



Re: [gentoo-user] Multiple package instances within a single package slot

2013-10-04 Thread Kerin Millar

On 04/10/2013 11:50, Alex Schuster wrote:

Hi there!

Some may remember me from posting here often. But since a year, I have a
new life, and much less time for sitting at my computer. Sigh. And my
beloved Gentoo got a little outdated.

So, a @world update does not work. I thought I give emerge -e @world a
try, this should sort out the problems, but this also does not go well.

I don't want to bother you with the whole lot of output emerge gives me,
and just ask a specific question at the moment. I get the 'Multiple
package instances within a single package slot have been pulled into the
dependency graph, resulting in a slot conflict' message, and several
affected packages. One example is claws:

mail-client/claws-mail:0

   (mail-client/claws-mail-3.9.0-r1::gentoo, ebuild scheduled for merge)
   pulled in by ~mail-client/claws-mail-3.9.0 required by
   (mail-client/claws-mail-address_keeper-1.0.7::gentoo, ebuild scheduled
   for merge)

   (mail-client/claws-mail-3.9.2::gentoo, ebuild scheduled for merge)
   pulled in by (no parents that aren't satisfied by other packages in
   this slot)

Looking at the ebuild, I see that claws-mail-address_keeper rdepends on
claws-mail-3.9.0.  But being on ~amd64, 3.9.2 would be current.

I can solve this by masking versions greater than 3.9.0. Two questions:

Why can't portage deal with this itself, and simply install the highest
version that fulfills all requirements?


Your use of --emptytree makes it slightly harder to determine from the 
above output, because the conflict messages will not correctly 
distinguish merged (installed) packages from those that are yet to be 
merged.


Do you have mail-client/claws-mail-address_keeper in your world file? If 
so, that would mandate its installation as part of the @world set (no if 
or buts). In turn, that would exhibit a hard dependency on 
claws-mail-3.9.0, which obviously cannot co-exist with 3.9.2, even if 
you have unmasked it.


Try removing the entry from the world file if it's there, then seeing 
whether the conflict is handled any differently.




And how do I notice an update to claws-mail-address_keeper that would
allow a newer version of claws-mail? Other than remembering those masks
and go through them once in a while?


As of the 3.9.1 ebuild, there is a comment above the collection of 
blocks that states:


Plugins are all integrated or dropped since 3.9.1

Further, from the 3.9.1 release notes:

All plugins previously packaged as 'Extra Plugins' are now contained 
within the Claws Mail package.


Thus, it's possible that the address_keeper plugin has been folded into 
the core. In turn, that would explain why it must block the plugin as a 
separate package.




Similar problems happen with sys-boot/syslinux, pulled in by
sys-boot/unetbootin, media-sound/jack-audio-connection-kit, pulled in by
app-emulation/emul-linux-x86-soundlibs, and all dev-qt packages, where I
did not yet figure out what to do.

I am running portage 2.2.7.

 Alex





Re: [gentoo-user] OT: default route dependent on dest port?

2013-10-04 Thread Kerin Millar

On 04/10/2013 21:55, Grant Edwards wrote:

Let's posit two network interfaces net1 (192.168.x.y/16) and net2
(172.16.a.b/16).  There's a NAT/gateway available on each of the
networks. I want to use the 172.16 gateway for TCP connections to port
80 and the 192.168 gateway for everything else.

I'm primarily following this example:

   http://www.tldp.org/HOWTO/Adv-Routing-HOWTO/lartc.netfilter.html

My main routing table contains all directly accessible subnets plus
a default route via the 192.168 gateway.

I created a second route table named pmain which is identical to
main except it has a different default route via the 172.16 gateway.

My ip rules are:

   0:  from all lookup local
   1:  from all fwmark 0x1 lookup pmain
   32766:  from all lookup main
   32767:  from all lookup default

I then add an iptables rule like this:

   iptables -A OUTPUT -t mangle -p tcp --dport 80 -j MARK --set-mark 1


It would help if you were to also supply the details of:

  * ip -f inet -o a s
  * ip route show table main
  * ip route show table pmain



Now all TCP packets destined for port 80 are sent to the 172.16
gateway, _but_ they're being sent with a 192.168 source address. The
TCP stack is apparently unaware of the advanced routing tricks and
thinks that the packets are going out via the 192.168 gateway.

IOW I've successfully re-routed TCP _packets_ but not the TCP
_connection_.

How do I tell the TCP stack that it's supposed to use the 172.16
interface/gateway for connections to port 80?


--Kerin



Re: [gentoo-user] which filesystem is more suitable for /var/tmp/portage?

2013-10-03 Thread Kerin Millar

On 18/09/2013 16:09, Alan McKinnon wrote:

On 18/09/2013 16:05, Peter Humphrey wrote:

On Wednesday 18 Sep 2013 14:52:30 Ralf Ramsauer wrote:


In my opinion, reiser is a bit outdated ...


What is the significance of its date? I use reiserfs on my Atom box for /var,
/var/cache/squid and /usr/portage, and on my workstation for /usr/portage  and
/home/prh/.VirtualBox. It's never given me any trouble at all.



Sooner or later, reiser is going to bitrot. The ReiserFS code itself
will not change, but everything around it and what it plugs into will
change. When that happens (not if - when), there is no-one to fix the
bug and you will find yourself up the creek sans paddle

An FS is not like a widget set, you can't really live with and
workaround any defects that develop. When an FS needs patching, it needs
patching, no ifs and buts. Reiser may nominally have a maintainer but in
real terms there is effectively no-one

Circumstances have caused ReiserFS to become a high-risk scenario and
even though it might perform faultlessly right now, continued use should
be evaluated in terms of that very real risk.


Another problem with ReiserFS is its intrinsic dependency on the BKL 
(big kernel lock). Aside from hampering scalability, it necessitated 
compromise when the time came to eliminate the BKL:


https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=8ebc423

Note the performance loss introduced by the patch; whether that was 
addressed I do not know.


In my view, ReiserFS is only useful for saving space through tail 
packing. Unfortunately, tail packing makes it slower still (an issue 
that was supposed to be resolved for good in Reiser4).


In general, I would recommend ext4 or xfs as the go-to filesystems these 
days.


--Kerin



Re: [gentoo-user] which filesystem is more suitable for /var/tmp/portage?

2013-10-03 Thread Kerin Millar

On 03/10/2013 13:08, Volker Armin Hemmann wrote:

Am 03.10.2013 11:55, schrieb Kerin Millar:

On 18/09/2013 16:09, Alan McKinnon wrote:

On 18/09/2013 16:05, Peter Humphrey wrote:

On Wednesday 18 Sep 2013 14:52:30 Ralf Ramsauer wrote:


In my opinion, reiser is a bit outdated ...


What is the significance of its date? I use reiserfs on my Atom box
for /var,
/var/cache/squid and /usr/portage, and on my workstation for
/usr/portage  and
/home/prh/.VirtualBox. It's never given me any trouble at all.



Sooner or later, reiser is going to bitrot. The ReiserFS code itself
will not change, but everything around it and what it plugs into will
change. When that happens (not if - when), there is no-one to fix the
bug and you will find yourself up the creek sans paddle

An FS is not like a widget set, you can't really live with and
workaround any defects that develop. When an FS needs patching, it needs
patching, no ifs and buts. Reiser may nominally have a maintainer but in
real terms there is effectively no-one

Circumstances have caused ReiserFS to become a high-risk scenario and
even though it might perform faultlessly right now, continued use should
be evaluated in terms of that very real risk.


Another problem with ReiserFS is its intrinsic dependency on the BKL
(big kernel lock). Aside from hampering scalability, it necessitated
compromise when the time came to eliminate the BKL:


and that one was solved when - 4-5 years ago?


Consider the manner in which the hard requirement on the BKL was 
removed, then objectively argue that its deep use of the specific 
properties of the BKL did not necessitate what would quite reasonably 
be described as a compromise.






https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=8ebc423


Note the performance loss introduced by the patch; whether that was
addressed I do not know.

In my view, ReiserFS is only useful for saving space through tail
packing. Unfortunately, tail packing makes it slower still (an issue
that was supposed to be resolved for good in Reiser4).



why don't you mention that reiserfs used barriers by default - and ext3
did not. Just to look good at 'using defaults benchmarks' (like
phoronix)? I mean, if we are digging around in history and btrfs is
still broken in many regards...


Because none of this passive aggressive rhetoric would have had any 
meaningful context within the content of my previous post.




tmpfs is the filesystem of choice for /tmp or /var/tmp/portage.


--Kerin



Re: [gentoo-user] Where to put advanced routing configuration?

2013-10-03 Thread Kerin Millar

On 03/10/2013 20:27, Grant Edwards wrote:

Let's say you wanted to configure routing of TCP packets based on destination
port like in this example:

   http://www.tldp.org/HOWTO/Adv-Routing-HOWTO/lartc.netfilter.html

[which contains a series of 'ip' and 'iptables' commands to get packets
destined for port 25 to use a specific gateway.]

How does one do this the right way on a Gentoo system?

Based on reading http://www.gentoo.org/doc/en/home-router-howto.xml, I think
I've figured out how to do the iptables part: you enter the 'iptables'
commands by hand to get the iptables set up the way you want, then you do
this:

   # /etc/init.d/iptables save
   # rc-update add iptables default


The iptables runscript is ideal for persisting the rules. However, 
during the initial construction of a non-trivial ruleset, I prefer to 
write a script that adds the rules. An elegant way of doing this is to 
use iptables-restore with a heredoc. The method - and its advantages - 
are described in this document (section 3):


http://inai.de/documents/Perfect_Ruleset.pdf
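As a sketch of that heredoc method (the chains and ports below are illustrative, not taken from the paper):

```shell
#!/bin/sh
# Load a complete ruleset atomically; iptables-restore replaces the
# tables it is given in one transaction, so re-running the script is
# safe and there is never a half-applied ruleset.
iptables-restore <<'EOF'
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
EOF
```

Running it requires root and the conntrack match available in the kernel.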


What about the 'ip' commands required to set up the tables, routes, and
rules?  Do those go in a startup script somewhere? Does one just edit
/etc/iproute2/rt_tables by hand? One would assume route configuration belongs


I would use the files under /etc/iproute2 for their intended purpose and 
a postup() hook in conf.d/net for anything else. When the postup() 
function is entered, the IFACE variable is automatically set to the name 
of the interface that triggered the event. Anything that is valid bash 
can go there.
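A minimal sketch of such a hook, assuming a table named "mail" has already been declared in /etc/iproute2/rt_tables; the interface name, gateway and fwmark are illustrative:

```shell
# /etc/conf.d/net
postup() {
    # IFACE is set by the runscript to the interface that came up
    if [ "${IFACE}" = "eth0" ]; then
        ip route add default via 192.168.0.254 table mail
        ip rule add fwmark 1 table mail
    fi
}
```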



in /etc/conf.d/net -- I've read through the advanced networking stuff in the
handbook, but it's not apparent to me where those 'ip' command belong.




Re: [gentoo-user] KVM on AMD ... why doesn't it just work?

2013-10-02 Thread Kerin Millar

On 02/10/2013 13:27, Stefan G. Weichinger wrote:


I try to set up KVM/QEMU on that new and shiny AMD server

24 cores of Opteron:

processor   : 23
vendor_id   : AuthenticAMD
cpu family  : 21
model   : 2
model name  : AMD Opteron(tm) Processor 6344

nice

I want gentoo-sources-3.10.7 (stable and long term supported) or
3.10.7-r1 (not yet on the machine).

I don't get /dev/kvm :-(

Right now I want this as a module kvm_amd or so.

# zgrep -i kvm /proc/config.gz
CONFIG_HAVE_KVM=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_IRQ_ROUTING=y
CONFIG_HAVE_KVM_EVENTFD=y
CONFIG_KVM_APIC_ARCHITECTURE=y
CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
CONFIG_KVM=m
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m

I compile the kernel with Canek's kerninst-script (see other thread).

kvm_amd does not load and gives me dozens of:

# dmesg | grep kvm  | head
[   13.460423] kvm_amd: Unknown symbol kvm_rdpmc (err 0)
[   13.460429] kvm_amd: Unknown symbol kvm_read_guest_page (err 0)
[   13.460437] kvm_amd: Unknown symbol kvm_requeue_exception (err 0)
[   13.460439] kvm_amd: Unknown symbol kvm_exit (err 0)
[   13.460441] kvm_amd: Unknown symbol kvm_init (err 0)
[   13.460444] kvm_amd: Unknown symbol kvm_enable_efer_bits (err 0)
[   13.460448] kvm_amd: Unknown symbol kvm_fast_pio_out (err 0)
[   13.460453] kvm_amd: Unknown symbol gfn_to_page (err 0)


Run modinfo kvm kvm-amd and check for discrepancies between the 
details of the two, especially regarding the vermagic field.




*sigh*

With the kernel booted I now do:

cd /usr/src/linux
make clean modules modules_install


I would suggest make mrproper to clean the source tree. Ensure that 
the .config has been backed up because it will be deleted as a result.
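For instance (a sketch; the backup path is illustrative and the tree is assumed to live at /usr/src/linux):

```shell
cd /usr/src/linux
cp .config /root/config.backup   # mrproper deletes .config
make mrproper
cp /root/config.backup .config
make oldconfig                   # re-validate the restored config
```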




and see if I can load the module(s) then.

In general I am rather disappointed by the performance of this server. I
expected way more *bang* for the bucks ...

still unsure if that clocksource-topic might be relevant or if my kernel
config is somehow stupid.

Stefan


--Kerin



Re: [gentoo-user] KVM on AMD ... why doesn't it just work?

2013-10-02 Thread Kerin Millar

On 02/10/2013 17:47, Stefan G. Weichinger wrote:

Am 02.10.2013 18:31, schrieb Stefan G. Weichinger:

Am 02.10.2013 15:54, schrieb Kerin Millar:


Run modinfo kvm kvm-amd and check for discrepancies between the
details of the two, especially regarding the vermagic field.


[...]


I would suggest make mrproper to clean the source tree. Ensure that
the .config has been backed up because it will be deleted as a result.


I had no module kvm because one was built as module and one into the
kernel. I now did mrproper and recompile both kvm and kvm_amd into the
kernel. The next reboot will show ...


Rather simple issue:

I have a kernel in boot with the suffix -safe ... and this one got
listed first in grub.conf.

kerninst found that one and always set that one as default so my various
recompilings never changed the kernel itself but only the modules ...
which led to strange mismatches ...


You could customize EXTRAVERSION for your safe kernel to prevent this 
from happening. It can be modified in the Makefile or - if I recall 
correctly - by creating a file named localversion-safe in the root of 
the source tree and enabling CONFIG_LOCALVERSION. That way, its module 
directory would be distinct.
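A runnable sketch of the localversion approach; a temporary directory stands in for the root of the kernel source tree:

```shell
# Stand-in for /usr/src/linux
src=$(mktemp -d)
# With CONFIG_LOCALVERSION enabled, the build appends the file's
# contents to the release string (e.g. 3.10.7-safe), so the safe
# kernel gets its own /lib/modules directory.
echo "-safe" > "$src/localversion-safe"
cat "$src/localversion-safe"   # → -safe
```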




Now I am on a clean and mrproper 3.10.7-r1 ... WITH /dev/kvm !

Thanks!

And the performance of the first VM looks good ... even with the vmdk
... conversion will follow.

Stefan






Re: [gentoo-user] HA-Proxy or iptables?

2013-08-30 Thread Kerin Millar

On 29/08/2013 08:54, Pandu Poluan wrote:

Hello list!

Here's my scenario:

Currently there is a server performing 2 functions; one runs on, let's
say, port 2000, and another one runs on port 3000.

Due to some necessary changes, especially the need to (1) provide more
resource for a function, and (2) delegate management of the functions
to different teams, we are going to split the server into two.

The problem is: Many users -- spread among 80+ branches throughout the
country -- access the server using IP Address instead of DNS name.

So, my plan was to leave port 2000's application on the original
server, implement port 3000's application on a new server, and have
all access to port 3000 of the original server to be redirected to
same port on the new server.

I can implement this using iptables SNAT & DNAT ... or I can use HA-Proxy.

Can anyone provide some benefit / drawback analysis on either solution?


I don't have any practical experience of using HA-Proxy. However, if you 
are sizing up Netfilter as a solution then I would suggest that you also 
consider Linux Virtual Server (LVS). It provides a lightweight NAT 
implementation and scales well. It is natively administered with the 
ipvsadm tool but I would recommend using ldirectord or such:


http://horms.net/projects/ldirectord/
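To give a flavour of LVS-NAT applied to the port-3000 scenario (addresses are from the documentation range; ipvsadm requires root and IPVS support in the kernel):

```shell
# Virtual service on the original server's address, round-robin scheduling
ipvsadm -A -t 192.0.2.10:3000 -s rr
# One real server: the new host, reached via masquerading (NAT)
ipvsadm -a -t 192.0.2.10:3000 -r 192.0.2.20:3000 -m
```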

--Kerin



Re: [gentoo-user] How to determine 'Least Common Denominator' between Xen(Server) Hosts

2013-08-15 Thread Kerin Millar

On 14/08/2013 13:15, Bruce Hill wrote:

On Wed, Aug 14, 2013 at 12:18:41PM +0700, Pandu Poluan wrote:

Hello list!

My company has 2 HP DL585 G5 servers and 5 Dell R... something servers. All
using AMD processors. They currently are acting as XenServer hosts.

How do I determine the 'least common denominator' for Gentoo VMs (running
as XenServer guests), especially for gcc flags?

I know that the (theoretical) best performance is to use -march=native ,
but since the processors of the HP servers are not exactly the same as the
Dell's, I'm concerned that compiling with -march=native will render the VMs
unable to migrate between the different hosts.


A couple of points:

* The effect of setting -march=native depends on the characteristics of
  the CPU (be it virtual or otherwise)
* The characteristics of the vCPU are defined by qemu's -cpu parameter
* qemu can emulate features not implemented by the host CPU (at a cost)

One way to go about it is to start qemu with a -cpu model that exposes 
features that all of your host CPUs have in common (or a subset 
thereof). In that case, -march=native is fine because all of the 
features that it detects as being available will be supported in 
hardware on the host side.


Another way is to expose the host CPU fully with -cpu host and to 
define your guest CFLAGS according to the most optimal subset. If you 
are looking for a 'perfect' configuration then this would be the 
most effective method, if applied correctly.


Irrespective of the method, by examining /proc/cpuinfo and using the 
diff technique mentioned by Bruce, you should be able to determine the 
optimal configuration.


Finally, in cases where the host CPUs differ significantly - in that 
native would imply a different -march value - you may choose to augment 
your CFLAGS with -mtune=generic to even out performance across the 
board. I don't think this would apply to you though.




Note: Yes I know the HP servers are much older than the Dell ones, but if I
go -march=native then perform an emerge when the guest is on the Dell host,
the guest VM might not be able to migrate to the older HPs.


To check what options CFLAGS set as -march=native would use:
gcc -march=native -E -v - </dev/null 2>&1 | sed -n 's/.* -v - //p'
(the first thing in the output is what CPU -march=native would enable)

Then you can run:
diff -u <(gcc -Q --help=target) <(gcc -march=native -Q --help=target)
to display target-specific options, versus native ones.
Assuming the 2 HP servers are identical, and the 5 Dell servers are identical,
you then only need to get the commonality of two processors (HP and Dell).
Since they're both AMD, you should have a good set of common features to help
you determine that least common denominator, or target.





Re: [gentoo-user] How to determine 'Least Common Denominator' between Xen(Server) Hosts

2013-08-15 Thread Kerin Millar

On 14/08/2013 16:23, Paul Hartman wrote:


On Wed, Aug 14, 2013 at 12:18 AM, Pandu Poluan pa...@poluan.info wrote:

I know that the (theoretical) best performance is to use
-march=native , but since the processors of the HP servers are not
exactly the same as the Dell's, I'm concerned that compiling with
-march=native will render the VMs unable to migrate between the
different hosts.



I use -mtune=native rather than -march=native, that way I can use some
advanced processor features if they are available, but my system will
still run if moved to a different host.


That's not how -mtune works. If -march is unspecified, it will default 
to the lowest common denominator for the platform which prevents the use 
of any distinguished processor features. For an amd64 install, that 
would be -march=x86-64.


Instead, -mtune affects everything that -march doesn't. Though it 
doesn't affect the instructions that *can* be used, it may affect which 
of the allowed instructions are used and how. For instance, gcc includes 
processor pipeline descriptions for different microarchitectures so as 
to emit instructions in a way that tries to avoid pipeline hazards:


http://gcc.gnu.org/onlinedocs/gccint/Processor-pipeline-description.html

If performance matters, a better approach is to look at what 
-march=native enables and manually specify all options that are common 
between the hosts. Further, if the host CPU microarchitecture varies 
then I would suggest adding -mtune=generic so as not to make potentially 
erroneous assumptions in the course of applying that type of 
optimisation. Indeed, -mtune=generic is the default if neither -march 
nor -mtune are specified.


Regarding qemu, the main thing is never to use a feature that would 
incur costly emulation on the host side.


--Kerin



Re: [gentoo-user] export LC_CTYPE=en_US.UTF-8

2013-08-07 Thread Kerin Millar

On 06/08/2013 23:42, Stroller wrote:


On 6 August 2013, at 14:04, Kerin Millar wrote:

...
If undefined, the value of LC_COLLATE is inherited from LANG. I'm not sure that 
overriding it is particularly useful nowadays but it doesn't hurt.


It's been a couple of years since I looked into this, but I'm given to believe 
that LANG should set all LC_ variables correctly, and that overriding them is 
frowned upon.


As has been mentioned, there are valid reasons to want to override the 
collation. Here is a concrete example:


https://lists.gnu.org/archive/html/bug-gnu-utils/2003-08/msg00537.html

Strictly speaking, grep is correct to behave that way but it can be 
confounding. In an ideal world, everyone would be using named classes 
instead of ranges in their regular expressions but it's not an ideal world.
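A small demonstration of why the named classes matter; under the C collation the range behaves as expected, whereas some UTF-8 collations may also place upper-case letters inside [a-z]:

```shell
# Only the all-lower-case line survives under the C/POSIX collation
printf 'apple\nBANANA\n' | LC_ALL=C grep '^[a-z]*$'   # → apple
```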


These days, grep no longer exhibits this characteristic in Gentoo. 
Nevertheless, it serves as a valid example of how collations for UTF-8 
locales can be a liability.


Of the other distros, Arch Linux also defined LC_COLLATE=C although I 
understand that they have just recently stopped doing that.


On a production system, I would still be inclined to use it for reasons 
of safety. For that matter, some people refuse to use UTF-8 at all on 
the grounds of security; the handling of variable-width encodings 
continues to be an effective bug inducer.



I had to do this myself because, due to a bug, the en_GB time formatting failed 
to display am or pm. I believe this should be fixed now.


Presumably:

a) LANG was defined inappropriately
b) LANG was defined appropriately but LC_TIME was defined otherwise
c) LC_ALL was defined, trumping all

I would definitely not advise doing any of these things.

--Kerin



Re: [gentoo-user] export LC_CTYPE=en_US.UTF-8

2013-08-07 Thread Kerin Millar

On 07/08/2013 17:40, Stroller wrote:


On 7 August 2013, at 13:41, Kerin Millar wrote:


On 06/08/2013 23:42, Stroller wrote:


On 6 August 2013, at 14:04, Kerin Millar wrote:

...
If undefined, the value of LC_COLLATE is inherited from LANG. I'm not sure that 
overriding it is particularly useful nowadays but it doesn't hurt.


It's been a couple of years since I looked into this, but I'm given to believe 
that LANG should set all LC_ variables correctly, and that overriding them is 
frowned upon.


As has been mentioned, there are valid reasons to want to override the 
collation. Here is a concrete example:

https://lists.gnu.org/archive/html/bug-gnu-utils/2003-08/msg00537.html

Strictly speaking, grep is correct to behave that way but it can be confounding.


Linking also this answer, which you're aware of:
https://lists.gnu.org/archive/html/bug-gnu-utils/2003-08/msg00600.html


Best practice will never be universally observed.



This only goes to illustrate that you shouldn't be going overriding these 
willy-nilly without full awareness of why you're doing so and what you're doing.


It also served to illustrate the overall point I was making - that 
sticking to the C/POSIX collation is not without value as a safety 
measure. Naturally, I would expect anyone else to exercise their own 
judgement.






I had to do this myself because, due to a bug, the en_GB time formatting failed 
to display am or pm. I believe this should be fixed now.


Presumably:

a) LANG was defined inappropriately
b) LANG was defined appropriately but LC_TIME was defined otherwise
c) LC_ALL was defined, trumping all



I'm having trouble parsing this reply, but perhaps you might find the full bug 
description helpful. I wrote about 1000 words on the subject there last year.

It is the top Google hit for en_gb am pm bug: 
http://sourceware.org/bugzilla/show_bug.cgi?id=3768


OK.

--Kerin



Re: [gentoo-user] export LC_CTYPE=en_US.UTF-8

2013-08-06 Thread Kerin Millar

On 05/08/2013 23:52, Chris Stankevitz wrote:

On Mon, Aug 5, 2013 at 11:53 AM, Mike Gilbert flop...@gentoo.org wrote:

The handbook documents setting a system-wide default locale. You
generally do this by setting the LANG variable in
/etc/env.d/02locale.

http://www.gentoo.org/doc/en/handbook/handbook-amd64.xml?part=1&chap=8#doc_chap3_sect3


Mike,

Thank you for your help.  I attempted to follow these instructions and
ran into three problems.  Can you please confirm the fixes I employed
to deal with each of these issues:

1. The handbook suggests I should modify the file /etc/env.d/02locale,
but that file does not exist on my system.  RESOLUTION: create the
file


Run eselect locale, first with the list parameter and then the set 
parameter as appropriate. It's easier.




2. The handbook suggests I should add this line to
/etc/env.d/02locale: 'LANG=de_DE.UTF-8', but I do not speak the
language DE.  RESOLUTION: type instead 'LANG=en_US.UTF-8' to match
/etc/locale.gen


Legitimate locales are those installed with glibc. These can be shown 
with either eselect locale list or locale -a.




3. The handbook suggests that I should add this line to
/etc/env.d/02locale: 'LC_COLLATE=C', but I do not know if they are
again talking about the language DE.  RESOLUTION: I assumed
LC_COLLATE=C refers to english and added the line without
modification.


C refers to the POSIX locale [1].

Defining LC_COLLATE is a workaround for behaviour deemed surprising to 
those otherwise unaware of the impact of collations. For example, files 
beginning with a dot might no longer appear at the top of a directory 
listing and ranges in regular expressions may be affected, depending on 
the extent to which a given program abides by the locale. Poorly written 
shell scripts that capture from ls (assuming a given order) might also 
be affected.
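The ordering difference is easy to demonstrate; with the C collation the leading dot sorts first, whereas a typical UTF-8 collation ignores the punctuation:

```shell
# C collation: '.' (0x2E) sorts before any letter
printf '.profile\nnotes\n' | LC_ALL=C sort
# → .profile
# → notes
```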


If undefined, the value of LC_COLLATE is inherited from LANG. I'm not 
sure that overriding it is particularly useful nowadays but it doesn't hurt.


--Kerin

[1] 
http://pubs.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap07.html#tag_07_02




Re: [gentoo-user] Recommendation for CPU type in QEMU?

2013-08-06 Thread Kerin Millar

On 03/08/2013 15:55, Marc Joliet wrote:

Am Wed, 31 Jul 2013 13:12:01 +0100
schrieb Kerin Millar kerfra...@fastmail.co.uk:


On 31/07/2013 12:31, Marc Joliet wrote:

[snip]



There's also -cpu host, which simply passes your CPU through to the guest.
That's what I use for my 32 bit WinXP VM. You can use it if you don't mind not
being able to migrate your guest, but it sounds to me like you're doing this on
a desktop machine, so I suspect guest migration doesn't matter to you.



I thought the same until very recently but it's not the case. The -cpu
host feature exposes all feature bits supported by qemu. Those may
include features that aren't supported in hardware by the host CPU, in
which case qemu has to resort to (slow) emulation if they are used.

--Kerin


Just a follow up: the most authoritative answer I could find is this:

   http://thread.gmane.org/gmane.comp.emulators.kvm.devel/84227/focus=90541

Furthermore, the Linux KVM tuning page also defines -cpu host as I understand
it:

   http://www.linux-kvm.org/page/Tuning_KVM

 From the above I conclude that -cpu host should *not* activate CPU features
that the host CPU does not support.

Otherwise I could only find out the following:

- the Gentoo and Arch wikis both recommend -cpu host in conjunction with KVM
   (see, e.g., http://wiki.gentoo.org/wiki/QEMU/Options)
- in contrast, http://wiki.qemu.org/Features/CPUModels#-cpu_host_vs_-cpu_best
   seems to match your statement
- some guy on serverfault.com says this
   
(http://serverfault.com/questions/404195/kvm-which-cpu-features-make-vms-run-better):

   Qemu doesn't work in the same way many other hypervisors do. For starters, it
   can provide full emulation. That means you can run x86 code on an ARM
   processor, for example. When in KVM mode, as you're using it, it doesn't
   actually do that... the processor is exposed no matter what, but what is
   reported to the OS will be changed by the -cpu flag.

   If that's correct, -cpu host might mean different things when in KVM
   mode vs. when not. However I'm not going to blindly trust that statement.

How/where did you find out that -cpu host also exposes non-host CPU features?



I checked the code and you're right. I had obtained the information from 
the qemu wiki but can now only assume that the content was discussing 
the feature before its implementation became concrete. Lesson to self: 
don't believe everything one reads in wikis (even official ones).


--Kerin



Re: [gentoo-user] export LC_CTYPE=en_US.UTF-8

2013-08-06 Thread Kerin Millar

On 06/08/2013 14:24, Bruce Hill wrote:

On Tue, Aug 06, 2013 at 02:04:00PM +0100, Kerin Millar wrote:


Legitimate locales are those installed with glibc. These can be shown
with either eselect locale list or locale -a.


Having never used eselect with locales (AFAIR) before today.

Why does locale -a return utf8? I know UTF-8 is accepted as standard, utf8
is not but usually recognized, but want to understand why locale -a output
omits the standard, which is set on my systems, and differs from the others:

mingdao@workstation ~ $ eselect locale list
Available targets for the LANG variable:
   [1]   C
   [2]   POSIX
   [3]   en_US.utf8
   [4]   en_US.UTF-8 *
   [ ]   (free form)
mingdao@workstation ~ $ locale -a
C
POSIX
en_US.utf8
mingdao@workstation ~ $ locale
LANG=en_US.UTF-8
LC_CTYPE=en_US.UTF-8
LC_NUMERIC=en_US.UTF-8
LC_TIME=en_US.UTF-8
LC_COLLATE=C
LC_MONETARY=en_US.UTF-8
LC_MESSAGES=en_US.UTF-8
LC_PAPER=en_US.UTF-8
LC_NAME=en_US.UTF-8
LC_ADDRESS=en_US.UTF-8
LC_TELEPHONE=en_US.UTF-8
LC_MEASUREMENT=en_US.UTF-8
LC_IDENTIFICATION=en_US.UTF-8
LC_ALL=


Apparently, utf8 is the canonical representation in glibc (which 
provides the locale tool):


http://lists.debian.org/debian-glibc/2004/12/msg00028.html

That eselect enumerates the locale twice when the alternate form is 
specified in /etc/env.d/02locale could be considered as a minor bug.


--Kerin



Re: [gentoo-user] export LC_CTYPE=en_US.UTF-8

2013-08-06 Thread Kerin Millar

On 06/08/2013 15:26, Bruce Hill wrote:

On Tue, Aug 06, 2013 at 02:40:04PM +0100, Kerin Millar wrote:


Apparently, utf8 is the canonical representation in glibc (which
provides the locale tool):

http://lists.debian.org/debian-glibc/2004/12/msg00028.html

That eselect enumerates the locale twice when the alternate form is
specified in /etc/env.d/02locale could be considered as a minor bug.

--Kerin


RFC 3629 does not mention utf8, but I did see this notation in Wikipedia, and
yes, I understand that's not official:

Other descriptions that omit the hyphen or replace it with a space, such as
utf8 or UTF 8, are not accepted as correct by the governing standards.[14]
Despite this, most agents such as browsers can understand them, and so
standards intended to describe existing practice (such as HTML5) may
effectively require their recognition.

[14] http://www.ietf.org/rfc/rfc3629.txt


Internally, glibc may use whatever representation it pleases.


I was only mildly curious seeing utf8 show up, because on numerous occasions
in #gentoo on FreeNode there have been different reports of incorrect
characters displayed with utf8, then fixed with UTF-8. Having read RFC 3629, I
just made it a habit to always use the standard (UTF-8).


Probably due to buggy applications. According to a glibc maintainer, 
they should be using the nl_langinfo() function but some try to read the 
locale name itself. The response of both of these commands is the same:


# LC_ALL=en_US.UTF-8 locale -k LC_CTYPE | grep charmap
# LC_ALL=en_US.utf8  locale -k LC_CTYPE | grep charmap

Ergo, applications that use the correct interface will be informed that 
the character encoding is UTF-8, irrespective of the format of the 
locale name.
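To see the interface in action without assuming which locales are generated, the C locale works too; for en_US.UTF-8 and en_US.utf8 alike, the same query would report UTF-8:

```shell
# Query the effective character map the way nl_langinfo() consumers do
LC_ALL=C locale -k LC_CTYPE | grep charmap
```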


Given the above, sticking to the lang_territory.UTF-8 format seems 
wise.




Having read the remainder of the Debian ML thread you referenced, I have a
headache. Debian did that to me when I used it for ~3 months in 2003.  :-)

Cheers,
Bruce





Re: [gentoo-user] Re: Any .config for vbox gentoo guest

2013-08-02 Thread Kerin Millar

On 02/08/2013 01:13, walt wrote:

On 08/01/2013 03:08 PM, Kerin Millar wrote:

On 30/07/2013 22:04, walt wrote:

On 07/29/2013 06:29 PM, Harry Putnam wrote:

Can anyone post a .config for a 3.8.13 kernel that is known to work on
a vbox install of gentoo as guest.

Working on a fresh install but don't have gentoo running anywhere to
rob a .config from.


This one worked for me.



snip

This config is missing various options that would significantly enhance kernel 
performance in its capacity as a guest.

For core paravirtualization support:

CONFIG_PARAVIRT_GUEST
CONFIG_KVM_CLOCK
CONFIG_KVM_GUEST

For virtio support:

CONFIG_VIRTIO
CONFIG_VIRTIO_PCI
CONFIG_VIRTIO_BLK
CONFIG_SCSI_VIRTIO
CONFIG_VIRTIO_NET

For the scsi/net virtio drivers to work in the guest, qemu must be started with 
the appropriate options. Further details can be found here:

http://www.linux-kvm.org/page/Virtio


Thanks Kerin.  I don't know about Harry, but I'm no expert in virtualization.

Just to clarify:  I know that virtualbox and kvm are both extensions of qemu,
but not exactly identical to each other.  Would those same kernel options be
useful in virtualbox as well as kvm/qemu?


KVM allows for hardware-assisted virtualization via Intel VT-x or AMD-V 
extensions. Without KVM, qemu is terribly slow. Collectively, the kernel 
options that I mentioned would entail the use of KVM.


Regarding VirtualBox, it does support a virtio-net type ethernet adapter 
so you would certainly benefit from enabling CONFIG_VIRTIO_NET in a guest.


I'm not entirely certain as to where VirtualBox stands with regard to 
PVOPS support [1] but it would probably also help to enable 
CONFIG_PARAVIRT_GUEST (even though there are no directly applicable 
sub-options).


--Kerin

[1] 
http://www.slideshare.net/xen_com_mgr/the-sexy-world-of-linux-kernel-pvops-project




Re: [gentoo-user] Re: Any .config for vbox gentoo guest

2013-08-01 Thread Kerin Millar

On 30/07/2013 22:04, walt wrote:

On 07/29/2013 06:29 PM, Harry Putnam wrote:

Can anyone post a .config for a 3.8.13 kernel that is known to work on
a vbox install of gentoo as guest.

Working on a fresh install but don't have gentoo running anywhere to
rob a .config from.


This one worked for me.



snip

This config is missing various options that would significantly enhance 
kernel performance in its capacity as a guest.


For core paravirtualization support:

   CONFIG_PARAVIRT_GUEST
   CONFIG_KVM_CLOCK
   CONFIG_KVM_GUEST

For virtio support:

   CONFIG_VIRTIO
   CONFIG_VIRTIO_PCI
   CONFIG_VIRTIO_BLK
   CONFIG_SCSI_VIRTIO
   CONFIG_VIRTIO_NET

For the scsi/net virtio drivers to work in the guest, qemu must be 
started with the appropriate options. Further details can be found here:


http://www.linux-kvm.org/page/Virtio
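A sketch of such an invocation (the image name is illustrative; device options per the page above):

```shell
# Disk and NIC attached as virtio devices; requires KVM on the host
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=gentoo.img,if=virtio \
    -net nic,model=virtio -net user
```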

--Kerin



Re: [gentoo-user] Recommendation for CPU type in QEMU?

2013-08-01 Thread Kerin Millar

On 01/08/2013 22:38, Walter Dnes wrote:

On Thu, Aug 01, 2013 at 08:41:56AM +0200, Michael Hampicke wrote


You can use -march=native on your gentoo hosts, that's no problem, as
long as you don't use it on your guests. That's the whole idea of VMs:
being able to move the virtual machine to another machine, that might be
completely different in terms of hardware. The goal is, to be machine
independent.


   I want to clarify one item, so please pardon me if it looks like I'm
asking the same question over again.  Assume that I launch QEMU with
-cpu core2duo and set -march=native in the guest's make.conf.  My
understanding is that the gcc compiler on the guest will see a core2duo,
not the physical i5 cpu on my desktop.


That's correct. Try running this in the guest:

gcc -march=native -Q --help=target | grep march | awk '{print $2}'



   We may be looking at different ways of doing the same thing.  You're
suggesting -march=core2 in the guest's make.conf.  I'm suggesting
-march=native in the guest's make.conf, which would pick up the cpu
type that QEMU sets (core2duo).  I'm trying to make things simpler, by
only having to specify the cpu type once, on the QEMU commandline, and
leaving gcc to adapt to the QEMU-specified cpu.


--Kerin




Re: [gentoo-user] Recommendation for CPU type in QEMU?

2013-07-31 Thread Kerin Millar

On 31/07/2013 11:11, Walter Dnes wrote:

   I'm looking at setting up 32-bit WINE to run a 32-bit Windows app.
Since I'm on a pure 64-bit (no multi-lib) machine, that doesn't exactly
work, which is why I'm looking at QEMU.  I need to run WINE in 32 bit
mode, on a 32-bit install in a VM.  Is a 64-bit virtual cpu type
recommended anyways?  Are the qemu and kvm cpu types faster/slower?
And what would they be listed as in the kernel .config?

   I'm not familiar with all the weird codenames for Intel's chips.
What's the hierarchy between Nehalem/Westmere/SandyBridge/Haswell ?
Here's the list of available types...

[i660][waltdnes][~/qemu] sudo /usr/bin/qemu-kvm -cpu help


Please provide the content of /proc/cpuinfo on the host.

--Kerin




Re: [gentoo-user] Recommendation for CPU type in QEMU?

2013-07-31 Thread Kerin Millar

On 31/07/2013 12:31, Marc Joliet wrote:

[snip]



There's also -cpu host, which simply passes your CPU through to the guest.
That's what I use for my 32 bit WinXP VM. You can use it if you don't mind not
being able to migrate your guest, but it sounds to me like you're doing this on
a desktop machine, so I suspect guest migration doesn't matter to you.



I thought the same until very recently but it's not the case. The -cpu 
host feature exposes all feature bits supported by qemu. Those may 
include features that aren't supported in hardware by the host CPU, in 
which case qemu has to resort to (slow) emulation if they are used.


--Kerin



Re: [gentoo-user] which VM do you recommend?

2013-07-30 Thread Kerin Millar

On 30/07/2013 11:36, Tanstaafl wrote:

On 2013-07-30 4:11 AM, Randolph Maaßen r.maasse...@gmail.com wrote:

It needs a couple of kernel modules to work, but emerge will promt to
you what it needs.


Side question...

I want to run the vmware tools on my gentoo VM (so the host can safely
power it down), but it also requires modules.

For security reasons I have never enabled modules on my servers, but...


It doesn't enhance security unless additional measures are taken (see 
below).




Is there a way to do this securely, so that *only* the necessary modules
could ever be loaded?


You can use grsecurity (which is in hardened-sources). With 
CONFIG_GRKERNSEC_MODSTOP enabled, you will be able to run:


# echo 1 > /proc/sys/kernel/grsecurity/disable_modules

After that, no further modules can be loaded. However, you would also 
need to disable privileged I/O and the ability to write to /dev/kmem, 
both of which grsecurity also facilitates.


--Kerin



Re: [gentoo-user] QEMU setup questions

2013-07-25 Thread Kerin Millar

On 25/07/2013 09:54, Walter Dnes wrote:

On Thu, Jul 25, 2013 at 05:16:21AM +0100, Kerin Millar wrote



2) What vncviewer or vncconnect parameters do I use to get to the
qemu session?


Assuming both server and client are run locally, connecting to either
localhost:0 or localhost:5900 should work.


   Thanks.  That helped me to get it working.

   I stumbled over the solution to my final problem by accident.  When
booting off the install cd, you have 15 seconds to hit any key, or
else it'll try to boot off the hard drive.  Given that I haven't
installed yet, it'll try to boot off the blank pseudo hard drive, and
fail.  Pressing any key will stop the timer.  I prefer to type in...


Just as on a real PC you can issue Ctrl+Alt+Del to have it reboot again 
(without respawning qemu). Look for the three-finger-salute icon on the 
toolbar.




gentoo net.ifnames=0

...to give myself a predictable interface name, namely eth0.  The
timer stops after I hit the g, and I can take my time typing the rest
of the command.  If you don't mind whatever ifname udev generates, you
can simply hit enter.  The drill *FOR INSTALL ONLY* under vnc is like
so...

* open up 2 terminals side-by-each
* in one type the command like the following *BUT DO NOT HIT ENTER*
vncviewer localhost:2

* in the other terminal, type a command like so, and hit ENTER
qemu-system-i386 -vnc :2 -cpu qemu32 -m 3072 -hda sda.raw -cdrom installx86.iso 
-boot d

* *IMMEDIATELY* go over to the other terminal and hit ENTER to
   activate vncviewer

* *AS SOON AS VNCVIEWER POPS UP* either hit ENTER for default install
   parameters, or start typing your own install command.

* *DO NOT PANIC* when the vncviewer screen goes dark for several seconds
   as the install checks out its environment.

* One more minor annoyance and workaround... the initial vncviewer
   screen is a 720x400 xterm.  The install thinks it's in a 1024x768
   framebuffer, so you get the bottom and right edges of the output
   clipped.  As soon as the penguin logo appears, you can close the xterm
   containing the vncviewer output, and open another vncviewer with the
   same command as the original.  This new copy senses the correct
   screensize and you can go on with your install.


That's odd. The VNC client should dynamically change the size of its 
window as appropriate.
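The two-terminal drill quoted above could also be collapsed into one 
small function; this is a sketch reusing the display number and file 
names from Walter's post, not a tested recipe:

```shell
# Sketch of the quoted two-terminal drill as a single function: launch
# qemu headless on VNC display :2 in the background, then attach the
# viewer. sda.raw and installx86.iso are the names from the original post.
start_install() {
    qemu-system-i386 -vnc :2 -cpu qemu32 -m 3072 \
        -hda sda.raw -cdrom installx86.iso -boot d &
    qemu_pid=$!
    sleep 2                     # give qemu time to bind TCP port 5902
    vncviewer localhost:2       # returns when the viewer window closes
    kill "$qemu_pid" 2>/dev/null
}
```

Defining it as a function means nothing runs until you call 
start_install, so racing the 15-second boot timer is the only manual 
step left.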





   I still haven't figured out why only root can run qemu-kvm.  I
solved that problem with an entry in /etc/sudoers.d and it is
definitely faster.

   If I specify a video card type for the guest, then the driver has to
be emerged on the guest; is that correct?



That's correct if you want optimal performance in X.org. The best option 
is -vga vmware in conjunction with x11-drivers/xf86-video-vmware.
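As an illustration only (the exact invocation isn't in the thread), the 
host-side flag and the guest-side package pair up like this; the 
commands are printed rather than executed so the sketch is safe to run 
anywhere:

```shell
# Illustration, not from the thread: the -vga flag on the host must be
# matched by the corresponding X.org driver inside the guest.
host_cmd='qemu-kvm -vga vmware -m 2048 -hda sda.raw'
guest_pkg='x11-drivers/xf86-video-vmware'
echo "host side:  $host_cmd"
echo "guest side: emerge $guest_pkg"
```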


--Kerin



Re: [gentoo-user] QEMU setup questions

2013-07-25 Thread Kerin Millar

On 25/07/2013 10:26, J. Roeleveld wrote:

On Thu, July 25, 2013 11:17, Kerin Millar wrote:

On 25/07/2013 09:54, Walter Dnes wrote:

On Thu, Jul 25, 2013 at 05:16:21AM +0100, Kerin Millar wrote



   I stumbled over the solution to my final problem by accident.  When
booting off the install cd, you have 15 seconds to hit any key, or
else it'll try to boot off the hard drive.  Given that I haven't
installed yet, it'll try to boot off the blank pseudo hard drive, and
fail.  Pressing any key will stop the timer.  I prefer to type in...


Just as on a real PC you can issue Ctrl+Alt+Del to have it reboot again
(without respawning qemu). Look for the three-finger-salute icon on the
toolbar.


Which toolbar?


* One more minor annoyance and workaround... the initial vncviewer
   screen is a 720x400 xterm.  The install thinks it's in a 1024x768
   framebuffer, so you get the bottom and right edges of the output
   clipped.  As soon as the penguin logo appears, you can close the xterm
   containing the vncviewer output, and open another vncviewer with the
   same command as the original.  This new copy senses the correct
   screensize and you can go on with your install.


That's odd. The VNC client should dynamically change the size of its
window as appropriate.


Not all VNC-clients do this.
Which do you use?


TightVNC but on Windows. I haven't used the Linux version in some time
and incorrectly assumed feature parity. Sorry about that. Regarding the
absence of a toolbar, pressing F8 should expose a menu with a
Ctrl-Alt-Del option.

--Kerin



Re: [gentoo-user] QEMU setup questions

2013-07-24 Thread Kerin Millar

On 24/07/2013 10:50, Walter Dnes wrote:

   So I emerged QEMU, which pulled in some dependencies.  Things are not
going well...

1) The following warning shows up in elog...


WARN: pretend
You have decided to compile your own SeaBIOS. This is not supported
by upstream unless you use their recommended toolchain (which you are
not).  If you are intending to use this build with QEMU, realize you
will not receive any support if you have compiled your own SeaBIOS.
Virtual machines subtly fail based on changes in SeaBIOS.

   I don't see any USE flags about this.

The binary flag is an IUSE default but you're preventing the flag from 
being enabled somehow.


2) I download a Gentoo install ISO, and create a 7 gig raw file sda.raw,
and start qemu-kvm

$ qemu-kvm -hda sda.raw -cdrom installx86.iso -boot d
Could not access KVM kernel module: Permission denied
failed to initialize KVM: Permission denied

but... but... but... I am a member of group kvm.

Check that KVM support is available in the kernel (CONFIG_KVM_INTEL or 
CONFIG_KVM_AMD as appropriate). Another thing to bear in mind is that 
older kernels are unable to load the module on an on-demand basis. I 
can't remember exactly in which version they changed that. You should 
end up with the following device node:


# ls -l /dev/kvm
crw-rw---- 1 root kvm 10, 232 Dec  2  2012 /dev/kvm
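The checks above can be bundled into a few quick, non-destructive 
commands (a sketch assuming a Linux host; the gpasswd hint in the last 
line is the usual way to add the group membership):

```shell
# Sketch: the three things to verify when /dev/kvm refuses access.
# Each check prints a yes/no style answer instead of failing.
grep -Eoq 'vmx|svm' /proc/cpuinfo \
    && echo "CPU virt extensions: present" \
    || echo "CPU virt extensions: absent (or masked in the BIOS)"
[ -c /dev/kvm ] \
    && ls -l /dev/kvm \
    || echo "/dev/kvm missing: kvm_intel/kvm_amd not loaded?"
id -nG | tr ' ' '\n' | grep -qx kvm \
    && echo "user is in group kvm" \
    || echo "user is NOT in group kvm (gpasswd -a USER kvm, then re-login)"
```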



3) su to root and retry to start QEMU

# qemu-kvm -hda sda.raw -cdrom installx86.iso -boot d
qemu-system-x86_64: pci_add_option_rom: failed to find romfile pxe-e1000.rom
VNC server running on `127.0.0.1:5900'

So root has sufficient permission, but there's a problem with the BIOS,
possibly related to item 1) above

Note; I can run as regular user, either of the 2 commands...
$ qemu-system-i386 -hda sda.raw -cdrom installx86.iso -boot d
$ qemu-system-x86_64 -hda sda.raw -cdrom installx86.iso -boot d
This doesn't run into permission problem 2), but still runs into the
rom not found problem 3).

4) At least VNC is running.  I emerged tigervnc and tried it

$ vncviewer

TigerVNC Viewer 64-bit v1.2.0 (20130723)
Built on Jul 23 2013 at 21:36:16
Copyright (C) 1999-2011 TigerVNC Team and many others (see README.txt)
See http://www.tigervnc.org for information on TigerVNC.
X_ChangeGC: BadFont (invalid Font parameter) 0x0

...and the dialog box is all blanks, presumably because of the missing
fonts.

VNC servers and clients vary in their capabilities rather more than they 
ought to. I would suggest tightvnc as I've found that to work splendidly 
with qemu.


--Kerin



Re: [gentoo-user] QEMU setup questions

2013-07-24 Thread Kerin Millar

On 24/07/2013 14:16, Neil Bothwick wrote:

On Wed, 24 Jul 2013 11:57:41 +0100, Kerin Millar wrote:


WARN: pretend
You have decided to compile your own SeaBIOS. This is not supported
by upstream unless you use their recommended toolchain (which you are
not).  If you are intending to use this build with QEMU, realize you
will not receive any support if you have compiled your own SeaBIOS.
Virtual machines subtly fail based on changes in SeaBIOS.

I don't see any USE flags about this.



The binary flag is an IUSE default but you're preventing the flag
from being enabled somehow.


That's the sort of fun you get with USE=-* :P


Indeed. I'd advocate this as a safer alternative:

USE_ORDER=env:pkg:conf:pkginternal:repo:env.d

--Kerin



Re: [gentoo-user] fdisk: DOS/GPT

2013-07-24 Thread Kerin Millar

On 24/07/2013 12:25, Pavel Volkov wrote:

Is fdisk lying to me?


It would appear so. If you are fond of fdisk, I'd suggest using gdisk as 
an alternative for managing disks using GPT. At least, until such time 
as the support in fdisk can be considered mature.


--Kerin



Re: [gentoo-user] Portage elog messages about historical symlinks

2013-07-24 Thread Kerin Millar

On 24/07/2013 19:22, Mick wrote:

I am getting messages like the one below from portage every now and then.
Especially, about /var/run, but in this case about a different directory:

* Messages for package dev-libs/klibc-1.5.20:

  * One or more symlinks to directories have been preserved in order to
  * ensure that files installed via these symlinks remain accessible. This
  * indicates that the mentioned symlink(s) may be obsolete remnants of an
  * old install, and it may be appropriate to replace a given symlink with
  * the directory that it points to.
  *
  *  /usr/lib/klibc/include/asm
  *

Perhaps I am having a senior moment, but I am not clear what I should do
despite the friendly message.  This is the symlink in question:

ls -la /usr/lib/klibc/include/asm
lrwxrwxrwx 1 root root 7 Nov  8  2008 /usr/lib/klibc/include/asm -> asm-x86


How am I supposed to *replace* it with the directory that it points to?



I would take it to mean that /usr/lib/klibc/include/asm should be a 
directory and contain the files that are currently residing in the 
asm-x86 directory. For example:


# rm asm
# mkdir asm
# mv -- asm-x86/* asm/
# rmdir asm-x86
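The same sequence can be rehearsed safely first; this sketch builds a 
mock layout under a throwaway /tmp directory (the file name unistd.h is 
just a placeholder) and applies the exact steps above:

```shell
# Rehearsal of the steps above against a mock layout, so nothing under
# /usr/lib/klibc is touched.
work=$(mktemp -d)
cd "$work"
mkdir asm-x86
echo '/* placeholder */' > asm-x86/unistd.h
ln -s asm-x86 asm              # recreate the obsolete directory symlink
rm asm                         # 1. drop the symlink
mkdir asm                      # 2. make it a real directory
mv -- asm-x86/* asm/           # 3. move the contents across
rmdir asm-x86                  # 4. remove the now-empty source
ls -l asm                      # unistd.h now lives directly under asm/
```

Once the rehearsal behaves as expected, repeat it in 
/usr/lib/klibc/include with the real files.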

--Kerin


