Re: [gentoo-user] Re: how to get rid of kernel modules?

2011-11-14 Thread waltdnes
On Sat, Nov 12, 2011 at 04:50:29AM +0200, Nikos Chantziaras wrote
 On 11/11/2011 08:14 AM, Pandu Poluan wrote:
 
  On Nov 11, 2011 11:02 AM, Nikos Chantziaras rea...@arcor.de
  mailto:rea...@arcor.de wrote:
 
  Isn't there a selection in make menuconfig asynchronous scsi scan (or
  something like that)?
 
 There is.  But scsi_wait_scan.ko will still be built.

  Thanks.  I'm always learning something new on this list.

-- 
Walter Dnes waltd...@waltdnes.org



Re: [gentoo-user] bash date puzzle

2011-11-14 Thread Neil Bothwick
On Mon, 14 Nov 2011 02:06:04 -0500, Philip Webb wrote:

 To convert a UNIX date to a human-readable version the command is :
 
   556: ~ date -d @1321251520
   Mon Nov 14 01:18:40 EST 2011
   
 I would like to create a Bash alias or function to do this,
 but can't get the Bash syntax right: it keeps telling me
 date: the argument `1321251520' lacks a leading `+';
 when using an option to specify date(s), any non-option
 argument must be a format string beginning with `+'
 Try `date --help' for more information.

It is difficult to say what is wrong with your alias as you haven't shown
it, but my guess is that it is introducing a space between the @ and the
timestamp, which gives exactly the error you get.


-- 
Neil Bothwick

When companies ship Styrofoam, what do they pack it in?




Re: [gentoo-user] The LIGHTEST web server (just for serving files)?

2011-11-14 Thread J. Roeleveld
On Sat, November 12, 2011 2:11 pm, YoYo Siska wrote:
 On Sat, Nov 12, 2011 at 07:40:08PM +0700, Pandu Poluan wrote:
 During my drive home, something hit my brain: why not have the 'master'
 server share the distfiles dir via NFS?

 So, the question now becomes: what's the drawback/benefit of NFS-sharing
 vs
 HTTP-sharing? The scenario is back-end LAN at the office, thus, a
 trusted
 network by definition.

 NFS doesn't like it when it loses the connection to the server. The only
 problems I have ever had with NFS were because I forgot to unmount it before a
 server restart, or when I took a computer (laptop) off to another
 network...

NFS shares can work, but they need to be unmounted before the network goes down.
If the server goes down, problems can occur there as well.
But that is true with any server/client file sharing (CIFS/Samba, for
instance).

 Otherwise it works well, especially when mounted read-only on the clients. However,
 for distfiles it might make sense to allow the clients to download and save
 tarballs that are not there yet ;), though I never used it with many
 computers emerging/downloading the same stuff, so I can't say whether locking
 etc. works correctly...

Locking works correctly: I have had 5 machines share the same NFS-exported
distfiles directory, all downloading source files at the same time.
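
For reference, a minimal setup along those lines might look like this (the
host name, network range and options below are only placeholders, not a
tested recipe):

  # on the 'master' server, /etc/exports (rw so clients can drop new tarballs in)
  /usr/portage/distfiles  192.168.0.0/24(rw,no_root_squash,no_subtree_check)

  # on each client, /etc/fstab
  master:/usr/portage/distfiles  /usr/portage/distfiles  nfs  rw,soft  0 0

no_root_squash is only needed if the clients write as root (e.g. without
FEATURES=userfetch).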

 And with NFS the clients won't duplicate the files in their own
 distfiles directories ;)

Big plus, for me :)

--
Joost




Re: [gentoo-user] The LIGHTEST web server (just for serving files)?

2011-11-14 Thread microcai

http://code.google.com/p/bashttpd/

run with systemd or xinetd
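
For the xinetd route, a service entry might look roughly like this (the
install path and port are assumptions, and it presumes bashttpd talks HTTP
on stdin/stdout the way inetd-style services do):

  service bashttpd
  {
      type        = UNLISTED
      port        = 8080
      socket_type = stream
      protocol    = tcp
      wait        = no
      user        = nobody
      server      = /usr/local/bin/bashttpd
  }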





Re: [gentoo-user] java java everywhere

2011-11-14 Thread Willie Wong
On Fri, Nov 11, 2011 at 08:07:43PM +0100, Florian Philipp wrote:
 To find out why portage wants the JDK, run `emerge -pv --depclean
 virtual/jdk`. Repeat until you find @world or something looking familiar.
 
 I bet you have LibreOffice installed with USE=java. There is an old
 thread from earlier this year which describes what functionality you
  lose when you deactivate that flag. All things considered, though, I
 think it will be faster to install a JDK than to re-emerge LibreOffice
 with USE=-java.
 
Heh, you guess wrong :p

I actually just have virtual/jdk in world (I can't remember why I
didn't use virtual/jre instead) because of some binaries-only,
non-portage stuff I run. 

What caught me off guard was that the upgrade did not offer the -bin
option (which, as I mentioned in a different branch of this thread, you
helped me figure out).

Thank you again for all the help, 

W
-- 
Willie W. Wong ww...@math.princeton.edu
Data aequatione quotcunque fluentes quantitae involvente fluxiones invenire 
 et vice versa   ~~~  I. Newton



Re: [gentoo-user] bash date puzzle

2011-11-14 Thread Paul Colquhoun
On Mon, 14 Nov 2011 06:13:34 AM Philip Webb wrote:
 14 Neil Bothwick wrote:
  On Mon, 14 Nov 2011 02:06:04 -0500, Philip Webb wrote:
  To convert a UNIX date to a human-readable version the command is :
556: ~ date -d @1321251520
Mon Nov 14 01:18:40 EST 2011
  
  I would like to create a Bash alias or function to do this,
  but can't get the Bash syntax right: it keeps telling me
  date: the argument `1321251520' lacks a leading `+';
  
  It is difficult to say what is wrong with your alias
  as you haven't shown it
 
   alias th='date -d @$1'
 
 was the first try, then adding '+' /or '\' to escape '+' or '@'.
 I also tried a function along similar lines.
 
  but my guess is that it is introducing a space
  between @ and the timestamp, which gives exactly the error you get.
 
 No, no spaces.


Aliases don't take arguments; you need a function for that.

Turning on shell debugging shows what is happening:

$ set -x
+ set -x

$ alias th='date -d @$1'
+ alias 'th=date -d @$1'

$ th 1321251520
+ date -d @ 1321251520
date: the argument `1321251520' lacks a leading `+';
when using an option to specify date(s), any non-option
argument must be a format string beginning with `+'
Try `date --help' for more information.


  Now set a value to $1 and see what happens...


$ set -- 'Ha Ha'
+ set -- 'Ha Ha'

$ th 1321251520
+ date -d @Ha Ha 1321251520
date: extra operand `1321251520'
Try `date --help' for more information.

$ set +x


All the 'alias' process does is simple text substitution.
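
A function, by contrast, receives the argument at run time; a minimal
version (with the parameter quoted to be safe) would be:

  th() { date -d "@$1"; }     # usage: th 1321251520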


-- 
Reverend Paul Colquhoun, ULC.http://andor.dropbear.id.au/~paulcol
 Before you criticize someone, you should walk a mile in their shoes.
Then, when you do, you'll be a mile away, and you'll have their shoes.




Re: [gentoo-user] Re: how to get rid of kernel modules?

2011-11-14 Thread Dale

waltd...@waltdnes.org wrote:

On Sat, Nov 12, 2011 at 04:50:29AM +0200, Nikos Chantziaras wrote

On 11/11/2011 08:14 AM, Pandu Poluan wrote:

On Nov 11, 2011 11:02 AM, Nikos Chantziarasrea...@arcor.de
mailto:rea...@arcor.de  wrote:

Isn't there a selection in make menuconfig asynchronous scsi scan (or
something like that)?

There is.  But scsi_wait_scan.ko will still be built.

   Thanks.  I'm always learning something new on this list.



Just to put more fertilizer on this weed.  I tried to get rid of this a 
long time ago.  It can't be done.  It will be there even if you edit the 
config directly.  It reminds me of the nutgrass in my garden.  It just 
keeps popping up.


Dale

:-)   :-)



Re: [gentoo-user] bash date puzzle

2011-11-14 Thread Willie Wong
On Mon, Nov 14, 2011 at 06:13:34AM -0500, Philip Webb wrote:
   alias th='date -d @$1'
 
 was the first try, then adding '+' /or '\' to escape '+' or '@'.
 I also tried a function along similar lines.
 

That is not how you use alias. 

What you want is to use a function. Replace the alias line by

function th { date -d @$1; }

in your bashrc and you'll probably be OK.

W
-- 
Willie W. Wong ww...@math.princeton.edu
Data aequatione quotcunque fluentes quantitae involvente fluxiones invenire 
 et vice versa   ~~~  I. Newton



Re: [gentoo-user] bash date puzzle

2011-11-14 Thread Philip Webb
14 Neil Bothwick wrote:
 On Mon, 14 Nov 2011 02:06:04 -0500, Philip Webb wrote:
 To convert a UNIX date to a human-readable version the command is :
   556: ~ date -d @1321251520
   Mon Nov 14 01:18:40 EST 2011
 I would like to create a Bash alias or function to do this,
 but can't get the Bash syntax right: it keeps telling me
 date: the argument `1321251520' lacks a leading `+';
 It is difficult to say what is wrong with your alias
 as you haven't shown it

  alias th='date -d @$1'

was the first try, then adding '+' /or '\' to escape '+' or '@'.
I also tried a function along similar lines.

 but my guess is that it is introducing a space
 between @ and the timestamp, which gives exactly the error you get.

No, no spaces.

-- 
,,
SUPPORT ___//___,   Philip Webb
ELECTRIC   /] [] [] [] [] []|   Cities Centre, University of Toronto
TRANSIT`-O--O---'   purslowatchassdotutorontodotca




Re: [gentoo-user] bash date puzzle

2011-11-14 Thread Neil Bothwick
On Mon, 14 Nov 2011 06:13:34 -0500, Philip Webb wrote:

  It is difficult to say what is wrong with your alias
  as you haven't shown it  
 
   alias th='date -d @$1'
 
 was the first try, then adding '+' /or '\' to escape '+' or '@'.
 I also tried a function along similar lines.
 
  but my guess is that it is introducing a space
  between @ and the timestamp, which gives exactly the error you get.  
 
 No, no spaces.

You invoke it as 'alias argument', so there is a space between the alias
and the argument and this space is included when the alias is expanded.
Otherwise aliases like ll='ls -l' would not work as 'll /mnt' would be
expanded to 'ls -l/mnt'.


-- 
Neil Bothwick

My brain's in gear, neutral's a gear ain't it?




Re: [gentoo-user] Display name and Wacom tablet

2011-11-14 Thread YoYo Siska
On Sun, Nov 13, 2011 at 04:07:42PM -0500, Daniel D Jones wrote:
 I have an Nvidia GeForce GTX 560 Ti running nvidia-drivers 275.09.07.  It 
 supports dual monitors via Twinview.  I have a Wacom Inspire3 6 x 8 Tablet.  
 The tablet is working but it covers the entire display across both monitors 
 and I'd like to restrict it to one monitor.
 
 This is supposed to be done via
 
 xsetwacom set Wacom Intuos3 6x8 pad MapToOutput VGA1
 
 VGA1 is supposed to be the name of the display you want to restrict it to, 
 and 
 that name is supposed to be available via xrandr.  xrandr gives me the 
 following output:
 
 ddjones@kushiel ~ $ xrandr
 xrandr: Failed to get size of gamma for output default
 Screen 0: minimum 2048 x 768, current 3360 x 1050, maximum 3360 x 1050
 default connected 3360x1050+0+0 0mm x 0mm
3360x1050  50.0* 
2048x768   51.0  
 
 
 ddjones@kushiel ~ $ xrandr --verbose
 xrandr: Failed to get size of gamma for output default
 Screen 0: minimum 2048 x 768, current 3360 x 1050, maximum 3360 x 1050
 default connected 3360x1050+0+0 (0x166) normal (normal) 0mm x 0mm
 Identifier: 0x165
 Timestamp:  13703
 Subpixel:   unknown
 Clones:
 CRTC:   0
 CRTCs:  0
 Transform:  1.00 0.00 0.00
 0.00 1.00 0.00
 0.00 0.00 1.00
filter: 
   3360x1050 (0x166)  176.4MHz *current
 h: width  3360 start0 end0 total 3360 skew0 clock   
 52.5KHz
 v: height 1050 start0 end0 total 1050   clock   50.0Hz
   2048x768 (0x167)   80.2MHz
 h: width  2048 start0 end0 total 2048 skew0 clock   
 39.2KHz
 v: height  768 start0 end0 total  768   clock   51.0Hz
 
 I've tried guessing at the display name, trying VGA, DVI and LVDS with 
 various 
 numbers appended but xsetwacom simply complains that the display does not 
 exist.
 
 I've also tried setting the Coordinate Transformation Matrix as described 
 here:
 
 http://sourceforge.net/apps/mediawiki/linuxwacom/index.php?title=Dual_and_Multi-
 Monitor_Set_Up#Dual_Monitors
 
 I can set the matrix via the xinput command and xinput list-props for the 
 device confirms that the matrix is set to the new value but it does not alter 
 the behaviour of the tablet - it still spans both displays.  I set and 
 confirmed the matrix for the pad, the eraser and the cursor.
 
 Any advice or suggestions on how to either either identify the display names 
 (or fix whatever issue causes xrandr not to display the info) or to otherwise 
 restrict the tablet to one monitor would be greatly appreciated.


The Nvidia driver doesn't use the xrandr protocol when dealing with multiple
monitors... they have their own extension, they say it is better and that
xrandr is bad, and they report through xrandr only one output (monitor) that
covers all Nvidia outputs. That's the 'default' output in your xrandr output;
its width is the sum of the widths of both monitors, etc.

They report the physical layout of the monitors through the Xinerama
extension, but I guess that wacom uses xrandr (Xinerama only numbers the
displays, it doesn't have names, reports only a subset of the randr
information and is generally older...).

I guess your only chance is to find parameters in the wacom driver that
allow you to restrict the tablet to a certain area by setting the
coordinates... (and that would of course work only for a specific
resolution...)
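
For what it's worth, the transformation-matrix route from the linked wiki
works out like this for the left monitor when both monitors have the same
width (use the device names xinput lists, and note it has to be applied to
the stylus/eraser/cursor devices that actually move the pointer, not just
the pad):

  # scale X to the left half of the 3360-pixel virtual screen
  xinput set-prop "Wacom Intuos3 6x8 stylus" \
      "Coordinate Transformation Matrix" 0.5 0 0 0 1 0 0 0 1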

yoyo




[gentoo-user] Re: Re: Another hardware thread

2011-11-14 Thread masterprometheus
Neil Bothwick wrote:

 On Sat, 12 Nov 2011 23:00:15 +0200, masterprometheus wrote:
 
 For AMD I'd recommend to go for a 960T :
 http://www.newegg.com/Product/Product.aspx?Item=N82E16819103995
  It's a 95W part, and as a Zosma it's actually a 6-core die. Most of those (not
  all, unfortunately) can be unlocked to a 6-core. Has Turbo functionality.
 
  That sounds like a poor gamble. A 3.0GHz CPU that I may be able to unlock
  to 6 cores for £20 less than a genuine 6-core 3.2GHz 1090T. I either get
  slightly less for slightly less, or a lot less for slightly less :(
 

Well, when I checked, the 960T was $125 at Newegg and the 1090T was $170.
That's a $45 difference. With $30 of that you can get a good cooler like
this one:
http://www.newegg.com/Product/Product.aspx?Item=N82E16835233082

Then overclock and beat the 1090T easily in most tasks (even with 4 
cores). 

But if you're not on a budget, get the best you can. An Intel Core i7
2600K is a great choice if you can afford it. It's already fast, easily
overclockable, and hyperthreading will help with media encoding etc.




Re: [gentoo-user] bash date puzzle

2011-11-14 Thread Philip Webb
14 Willie Wong wrote:
 On Mon, Nov 14, 2011 at 06:13:34AM -0500, Philip Webb wrote:
   alias th='date -d @$1'
 That is not how you use alias. 
 What you want is to use a function. Replace the alias line by
   function th { date -d @$1; }
  in your bashrc and you'll probably be OK.

That's what I thought I tried, but evidently not:

  function th { date -d @$1 ; }

does indeed work (with a space before ';').

Thanks for the various replies.

-- 
,,
SUPPORT ___//___,   Philip Webb
ELECTRIC   /] [] [] [] [] []|   Cities Centre, University of Toronto
TRANSIT`-O--O---'   purslowatchassdotutorontodotca




Re: [gentoo-user] The LIGHTEST web server (just for serving files)?

2011-11-14 Thread Michael Mol
Isn't there a kernel-land HTTP server? ISTR seeing the option. I don't
know anything about it, though.

On Mon, Nov 14, 2011 at 5:10 AM, microcai micro...@fedoraproject.org wrote:

 http://code.google.com/p/bashttpd/

 run with systemd or xinetd





-- 
:wq



[gentoo-user] Re: The SIMPLEST web server to config (this time - just for serving video files) ?

2011-11-14 Thread Grant Edwards
On 2011-11-13, Mark Knecht markkne...@gmail.com wrote:

My main issue isn't really the lightest in terms of memory or CPU
 usage, but rather something whose config is VERY easy to set up, so
 that I don't have to spend much time reading manuals.

Busybox httpd: Just start it in the directory from which you want to
serve files.
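
For example (port and directory arbitrary):

  cd /var/www/files         # directory whose contents you want to serve
  busybox httpd -p 8080 -f  # -f keeps it in the foreground; drop it to daemonize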

-- 
Grant Edwards   grant.b.edwardsYow! If I felt any more
  at   SOPHISTICATED I would DIE
  gmail.comof EMBARRASSMENT!




Re: [gentoo-user] bash date puzzle

2011-11-14 Thread Stroller

On 14 November 2011, at 07:06, Philip Webb wrote:

 To convert a UNIX date to a human-readable version the command is :
 
  556: ~ date -d @1321251520
  Mon Nov 14 01:18:40 EST 2011
 
 I would like to create a Bash alias or function to do this,
 but can't get the Bash syntax right: it keeps telling me
 date: the argument `1321251520' lacks a leading `+';

~ $ function foo {
 date -d @$1
 }
 ~ $ foo 1321251520
Mon Nov 14 06:18:40 GMT 2011
~ $ 

Copied and pasted literally from my terminal, which is why you see the PS2
continuation prompt on lines 2 and 3.

Stroller.





[gentoo-user] Re: net-libs/gnutls-2.10.5 fails to compile

2011-11-14 Thread walt

On 11/12/2011 08:30 PM, Justin Findlay wrote:

I can't get the package net-libs/gnutls-2.10.5 to emerge because of a
c++ linker error.  What can I do to fix this?



CXXFLAGS=-march=pentium3 -mtune=pentium3 -m32 -Os -fmessage-length=0
-pipe -fno-implicit-templates

  ^

I'm wondering about that no-implicit-templates.  What happens if you
delete it?






Re: [gentoo-user] The LIGHTEST web server (just for serving files)?

2011-11-14 Thread v_2e
  There is a very small web server called thttpd which is very
lightweight and lets you start serving files very quickly.
  It runs on my home router machine with an old Pentium CPU and several
megabytes of RAM, and seems to consume about 500 KB of it.
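
  A minimal invocation for the distfiles case discussed earlier might be
(flag details are in thttpd(8)):

  thttpd -p 8080 -d /usr/portage/distfiles -u nobody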

  Regards,
Vladimir

On Mon, 14 Nov 2011 09:36:22 -0500
Michael Mol mike...@gmail.com wrote:

 Isn't there a kernelland HTTP server? ISTR seeing the option. I don't
 know anything about it, though.
 
 On Mon, Nov 14, 2011 at 5:10 AM, microcai
 micro...@fedoraproject.org wrote:
 
  http://code.google.com/p/bashttpd/
 
  run with systemd or xinetd
 
 
 
 
 


- 
 v...@ukr.net



Re: [gentoo-user] Re: net-libs/gnutls-2.10.5 fails to compile

2011-11-14 Thread Justin Findlay
On Mon, Nov 14, 2011 at 2:33 PM, walt w41...@gmail.com wrote:
 CXXFLAGS=-march=pentium3 -mtune=pentium3 -m32 -Os -fmessage-length=0
 -pipe -fno-implicit-templates

          ^

 I'm wondering about that no-implicit-templates.  What happens if you
 delete it?

I was somewhat surprised by this too.  I've been grepping through
inherited ebuilds and eclasses to try to find where
-fno-implicit-templates gets inserted into CXXFLAGS, but found
nothing.  Within the ebuild itself, the only files that contained this
flag were the Makefiles (not Makefile.??), temp/build.log (naturally,
but as a particle physicist I feel compelled to list the self
interaction for completeness), and temp/environment.  It seemed
unlikely that gentoo would have put that into the default environment,
and I found no references in /etc/env.d/, /etc/*bash*, /etc/skel/, or
even /etc/.

At this point I remembered that I was emerging through a chroot from
another distro, and indeed -fno-implicit-templates came from a custom
CXXFLAGS there.  Thanks for giving me the idea to track this down,
because removing the flag also removes the link error.


Justin



[gentoo-user] Firefox desktop icon missing on e17

2011-11-14 Thread Mick
This may apply to other desktops, but the problem I report here happens on
e17 rev. 64957.

Following the update to firefox-7.0.1, no icon for firefox shows up on the iBar.

Looking at /home/michael/.local/share/applications/ I see a mozilla-
firefox-3.6.desktop file instead of one for firefox-7.0.1.

Shouldn't this file have been updated?  How can I fix it?
-- 
Regards,
Mick




[gentoo-user] Keyboard stopped working

2011-11-14 Thread Grant
I just finished updating my wife's laptop which hadn't been updated in
6-12 months.  Her laptop's hardware is identical to mine which is
always kept up-to-date and works great.  After rebooting, the keyboard
on her laptop doesn't work although the mouse does work.  The keyboard
works in grub and after booting to a LiveCD so it's not a hardware
problem.  Does anyone know what might have caused this?

- Grant



Re: [gentoo-user] Keyboard stopped working

2011-11-14 Thread Alex Schuster
Grant writes:

 I just finished updating my wife's laptop which hadn't been updated in
 6-12 months.  Her laptop's hardware is identical to mine which is
 always kept up-to-date and works great.  After rebooting, the keyboard
 on her laptop doesn't work although the mouse does work.  The keyboard
 works in grub and after booting to a LiveCD so it's not a hardware
 problem.  Does anyone know what might have caused this?

If it is working in the text console but not in X, this might come from an
xorg upgrade without re-merging xf86-input-keyboard. Run emerge
@x11-module-rebuild to fix this.

Wonko



Re: [gentoo-user] {OT} Are push backups flawed?

2011-11-14 Thread Grant
 And if I pull, none of my backed-up systems are secure because anyone
 who breaks into the backup server has root read privileges on every
 backed-up system and will thereby gain full root privileges quickly.

 IMO that depends on whether you also backup the authentication-related
 files or not. Exclude them from backup, ensure different root passwords
 for all boxes, and now you can limit the infiltration.

 If you're pulling to the backup server, that backup server has to be
 able to log in to and read all files on the other servers. Including
 e.g. your swap partition and device files.

 What if I have each system save a copy of everything to be backed up
 from its own filesystem in a separate directory and change the
 ownership of everything in that directory so it can be read by an
 unprivileged backup user?  Then I could have the backup server pull
 that copy from each system without giving it root access to each
 system.  Can I somehow have the correct ownerships for the backup
 saved in a separate file for use during a restore?

 - Grant


 You could just as well use an NFS share with no_root_squash. It is
 really more a question of finding the right combination of tools to
 ensure proper separation of concern for server and client.

 In fact, I think we are intermixing three distinct problems:
 1. (Possible) limitations of rdiff-backup with regard to untrusted
 backup servers or clients.

The limitation is real, unfortunately.  All backups created by
rdiff-backup more than a second ago can be deleted with something like
this:

rdiff-backup --remove-older-than 1s backup@12.34.56.78::/path/to/backup

 2. The purely technical question which file transfer protocols protect
 against write access from backup server to backup client and backup
 client to older backups on the server.

rdiff-backup doesn't provide that sort of protection.  Do any file
transfer protocols?

 3. The more or less organisational question what level of protection
 backups need and how fast security breaks have to be detected.

I think backups should be very well protected and security breaks
should not have to be immediately detected.

- Grant


 I think push vs. pull is just a secondary concern with regard to the
 second question and has practically no relevance to the third one.

 Regards,
 Florian Philipp



Re: [gentoo-user] {OT} Are push backups flawed?

2011-11-14 Thread Grant
 And if I pull, none of my backed-up systems are secure because anyone
 who breaks into the backup server has root read privileges on every
 backed-up system and will thereby gain full root privileges quickly.

 IMO that depends on whether you also backup the authentication-related
 files or not. Exclude them from backup, ensure different root passwords
 for all boxes, and now you can limit the infiltration.

 If you're pulling to the backup server, that backup server has to be
 able to log in to and read all files on the other servers. Including
 e.g. your swap partition and device files.

 What if I have each system save a copy of everything to be backed up
 from its own filesystem in a separate directory and change the
 ownership of everything in that directory so it can be read by an
 unprivileged backup user?

 You've just reinvented the push backup =)

 If separate-directory is on the same server, an attacker can log in and
 overwrite all of your files with zeros. Those zeros will be pulled to
 the backup server, destroying your backups.

That's not the case at all.  The zeros would be pulled to the backup
server via rdiff-backup and saved as a new version in the repository.
The backups would be safe.

- Grant


 If separate-directory is on the backup server...



Re: [gentoo-user] Keyboard stopped working

2011-11-14 Thread Grant
 I just finished updating my wife's laptop which hadn't been updated in
 6-12 months.  Her laptop's hardware is identical to mine which is
 always kept up-to-date and works great.  After rebooting, the keyboard
 on her laptop doesn't work although the mouse does work.  The keyboard
 works in grub and after booting to a LiveCD so it's not a hardware
 problem.  Does anyone know what might have caused this?

  If it is working in the text console but not in X, this might come from an
 xorg upgrade without re-merging xf86-input-keyboard. emerge
 @x11-module-rebuild to fix this.

Right on the money, that got it, thank you.

- Grant



Re: [gentoo-user] Keyboard stopped working

2011-11-14 Thread Hung Dang
You might need to re-emerge the X11 drivers. Try the command below, then restart X
to see if it fixes your problem.


emerge `qlist -I -C x11-drivers/`

Hung

On 11/14/11 19:25, Grant wrote:

I just finished updating my wife's laptop which hadn't been updated in
6-12 months.  Her laptop's hardware is identical to mine which is
always kept up-to-date and works great.  After rebooting, the keyboard
on her laptop doesn't work although the mouse does work.  The keyboard
works in grub and after booting to a LiveCD so it's not a hardware
problem.  Does anyone know what might have caused this?

- Grant






Re: [gentoo-user] {OT} Are push backups flawed?

2011-11-14 Thread Michael Mol
It's out of scope for file transfer protocols; it's a daemon/system-local
problem.  Attach pre-event or post-event scripts server-side for any special
munging or protections you'd like to apply (such as triggering an LVM
snapshot, for example...).
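
A sketch of such a post-backup hook, assuming the backup repository lives on
an LVM logical volume called /dev/vg0/backups (all names are placeholders):

  # take a snapshot right after the backup job finishes
  lvcreate --snapshot --size 1G --name backups-$(date +%Y%m%d) /dev/vg0/backups
  # stale snapshots can be dropped later with lvremove /dev/vg0/backups-YYYYMMDD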

(sorry for the top post; in-line can be done in this client, but it's more
cumbersome than I have time for atm...)

ZZ



Re: [gentoo-user] {OT} Are push backups flawed?

2011-11-14 Thread Grant
 It's out of scope for file transfer protocols; it's a daemon/system-local
 problem.  Attach pre-event or post-event scripts serverside for any special
 munging or protections you'd like to apply. (Such as triggering an LVM
 snapshot, for example...)

I must be going about this the wrong way.  Am I the only one using
automated backups?  If not, how is it done properly?

- Grant




Re: [gentoo-user] {OT} Are push backups flawed?

2011-11-14 Thread Grant
Then I could have the backup server pull
 that copy from each system without giving it root access to each
 system.  Can I somehow have the correct ownerships for the backup
 saved in a separate file for use during a restore?


 If you're intent on making a two-stage pull work; you can do it by
 creating a 'backups' user on your servers, and then using filesystem
 ACLs to grant backups+r to every file/directory you want to back up.
 That way, an attacker on the backup server can't decide to peruse the
 rest of your stuff.

I like that.  So use ACLs to grant access to the backups instead of
using ownership/permissions so that the ownership/permissions stay
intact.  I've never used ACLs.  Do they override
ownership/permissions?  In other words, if the ACL specifies backups+r
to a file owned by root that is chmod 700, backups can read it
anyway?
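
For reference, that kind of grant looks like the following; a named-user ACL
entry is checked in addition to the normal owner/group/other bits (subject to
the ACL mask, which setfacl recalculates by default), and getfacl/setfacl can
also dump and replay the ownership metadata for a restore (paths are only
examples):

  # give the 'backups' user read access (and directory traversal) without
  # changing owners or modes
  setfacl -R -m u:backups:rX /srv/backup-staging

  # dump owner/group/ACL metadata, and replay it (as root) at restore time
  getfacl -R -p /srv/backup-staging > /root/backup-perms.facl
  setfacl --restore=/root/backup-perms.facl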

 The easiest method, though, is to just add a third stage. Either move
 the backups on the backup server to another directory after the backup
 job completes, or sync/burn/whatever them off-site. In this case the
 backup server can't access anything you don't give it, and the
 individual servers can't trash their backed-up data.

I don't see how that could work in an automated fashion.  Could you
give me an example?

- Grant



Re: [gentoo-user] {OT} Are push backups flawed?

2011-11-14 Thread Michael Mol
You identified a flaw in the system as you were using it. You're right,
those are flaws. However, you can  fix those flaws by applying some magic
as a sysadmin. That's why several posts in the thread have mentioned
versioning your backups in some fashion. I've mentioned lvm a couple times.
I think someone else mentioned pulling the backup target's data to another
locale, either via a pull from another server, or via something like a
traditional incremental tape backup.

You're getting the data off the original machines to a remote location,
which is good. You identified a way the backed-up data could be tampered
with, which is good. You just need to put in another (better) barrier to
protect the data from being tampered with, or limit how much data is lost
in such an event.

ZZ



Re: [gentoo-user] {OT} Are push backups flawed?

2011-11-14 Thread Grant
 You identified a flaw in the system as you were using it. You're right,
 those are flaws. However, you can  fix those flaws by applying some magic
 as a sysadmin. That's why several posts in the thread have mentioned
 versioning your backups in some fashion. I've mentioned lvm a couple times.

I thought versioning meant that you could roll back to a previous
version.  rdiff-backup provides that.

 I think someone else mentioned pulling the backup target's data to another
 locale, either via a pull from another server, or via something like a
 traditional incremental tape backup.

So the systems push to the backup server and a second backup server
pulls from the first backup server?  Should the second backup server
use rdiff-backup against the rdiff-backup repository on the first
backup server?  I think I've read that's not a good idea.

What does everybody else do?  I feel like the first person to ever
attempt secure automated backups.

- Grant




Re: [gentoo-user] {OT} Are push backups flawed?

2011-11-14 Thread Michael Mol
Sure, but you've noted that rdiff-backup is insecure if the source box is
violated. What you need, then, is a layer of versioning not subject to that
vulnerability.

ZZ



Re: [gentoo-user] {OT} Are push backups flawed?

2011-11-14 Thread Grant
 Sure, but you've noted that rdiff-backup is insecure if the source box is
 violated. What you need, then, is a layer of versioning not subject to that
 vulnerability.

Does it exist?

- Grant


  You identified a flaw in the system as you were using it. You're right,
  those are flaws. However, you can  fix those flaws by applying some
  magic
  as a sysadmin. That's why several posts in the thread have mentioned
  versioning your backups in some fashion. I've mentioned lvm a couple
  times.

 I thought versioning meant that you could roll back to a previous
 version.  rdiff-backup provides that.

  I think someone else mentioned pulling the backup target's data to
  another
  locale, either via a pull from another server, or via something like a
  traditional incremental tape backup.

 So the systems push to the backup server and a second backup server
 pulls from the first backup server?  Should the second backup server
 use rdiff-backup against the rdiff-backup repository on the first
 backup server?  I think I've read that's not a good idea.

 What does everybody else do?  I feel like the first person to ever
 attempt secure automated backups.

 - Grant


  You're getting the data off the original machines to a remote location,
  which is good. You identified a way the backed-up data could be tampered
  with, which is good. You just need to put in another (better) barrier to
  protect the data from being tampered with, or limit how much data is
  lost in
  such an event.
 
  ZZ
 
  On Nov 14, 2011 8:21 PM, Grant emailgr...@gmail.com wrote:
 
   It's out of scope for file transfer protocols; it's a
   daemon/system-local
   problem.  Attach pre-event or post-event scripts serverside for any
   special
   munging or protections you'd like to apply. (Such as triggering an
   LVM
   snapshot, for example...)
 
  I must be going about this the wrong way.  Am I the only one using
  automated backups?  If not, how is it done properly?
 
  - Grant
 
 
And if I pull, none of my backed-up systems are secure because
anyone
who breaks into the backup server has root read privileges on
every
backed-up system and will thereby gain full root privileges
quickly.
   
IMO that depends on whether you also backup the
authentication-related
files or not. Exclude them from backup, ensure different root
passwords
for all boxes, and now you can limit the infiltration.
   
If you're pulling to the backup server, that backup server has
to
be
able to log in to and read all files on the other servers.
Including
e.g. your swap partition and device files.
   
What if I have each system save a copy of everything to be backed
up
from its own filesystem in a separate directory and change the
ownership of everything in that directory so it can be read by an
unprivileged backup user?  Then I could have the backup server
pull
that copy from each system without giving it root access to each
system.  Can I somehow have the correct ownerships for the backup
saved in a separate file for use during a restore?
   
- Grant
   
   
You could just as well use an NFS share with no_root_squash. It is
really more a question of finding the right combination of tools
to
ensure proper separation of concern for server and client.
   
In fact, I think we are intermixing three distinct problems:
1. (Possible) limitations of rdiff-backup with regard to untrusted
backup servers or clients.
  
   The limitation is real unfortunately.  All backups created by
   rdiff-backup more than a second ago can be deleted something like
   this:
  
   rdiff-backup --remove-older-than 1s
   backup@12.34.56.78::/path/to/backup
  
2. The purely technical question which file transfer protocols
protect
against write access from backup server to backup client and
backup
client to older backups on the server.
  
   rdiff-backup doesn't provide those sort of protections.  Do any file
   transfer protocols?
  
3. The more or less organisational question what level of
protection
backups need and how fast security breaks have to be detected.
  
   I think backups should be very well protected and security breaks
   should not have to be immediately detected.
  
   - Grant
  
  
I think push vs. pull is just a secondary concern with regard to
the
second question and has practically no relevance to the third one.
   
Regards,
Florian Philipp



Re: [gentoo-user] XEmacs from Outer Space...is it Plan 9 or what ? ;)

2011-11-14 Thread waltdnes
On Sat, Nov 12, 2011 at 05:39:54PM +0100, meino.cra...@gmx.de wrote
 Fredric Johansson fredric.miscm...@gmail.com [11-11-12 17:32]:
 
  But: if I really did install it accidentally back then, XEmacs would
  already have been needed and installed. So why is it being pulled in now,
  if it would already have been installed when I accidentally installed
  app-xemacs/emerge?

   Here's what you may have done ***WITHOUT*** the -p:

waltdnes@d531 ~ $ emerge -p emerge vim

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild  N ] app-admin/eselect-emacs-1.13 
[ebuild  N ] media-libs/audiofile-0.3.1  USE=-static-libs 
[ebuild   R] app-editors/vim-7.3.266 
[ebuild  N ] app-editors/xemacs-21.4.22-r2  USE=X gif jpeg png tiff -Xaw3d 
-athena -berkdb -canna -dnd -eolconv -esd -freewnn -gdbm -gpm -ldap -motif 
-mule -nas -neXt -pop -postgres -xface -xim 
[ebuild  N ] app-xemacs/xemacs-base-2.27 
[ebuild  N ] app-xemacs/emerge-1.11

  Pulls in xemacs.  Talk about an accident waiting to happen...

-- 
Walter Dnes waltd...@waltdnes.org



[gentoo-user] Temperature of NVidia GPUs

2011-11-14 Thread meino . cramer
Hi,

Is there any tool to read out the temperature of NVidia GPUs other
than the NVidia Settings GUI and nvclock?

Thank you very much in advance for any help!
Best regards,
mcc




[gentoo-user] Re: Temperature of NVidia GPUs

2011-11-14 Thread Hartmut Figge
meino.cra...@gmx.de:

Is there any tool to read out the temperature of NVidia GPUs other
than the NVidia Settings GUI and nvclock?

Perhaps this one?

hafi@i5 ~ $ nvidia-smi
Tue Nov 15 05:29:24 2011
+--+
| NVIDIA-SMI 2.290.06   Driver Version: 290.06 |
|---+--+--+
| Nb.  Name | Bus IdDisp.  | Volatile ECC SB / DB |
| Fan   Temp   Power Usage /Cap | Memory Usage | GPU Util. Compute M. |
|===+==+==|
| 0.  GeForce GTX 460   | :01:00.0  N/A|   N/AN/A |
|  40%   39 C  N/A   N/A /  N/A |   6%   62MB / 1023MB |  N/A  Default|
|---+--+--|
| Compute processes:   GPU Memory |
|  GPU  PID Process name   Usage  |
|=|
|  0.   ERROR: Not Supported  |
+-+
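
Depending on the driver, the temperature section can also be queried on its
own; the -d filter exists in newer nvidia-smi builds (older ones may only
support a plain -q):

  nvidia-smi -q -d TEMPERATURE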


Hartmut
-- 
Usenet-ABC-Wiki http://www.usenet-abc.de/wiki/
Von Usern fuer User  :-)




Re: [gentoo-user] {OT} Are push backups flawed?

2011-11-14 Thread Pandu Poluan
On Nov 15, 2011 9:49 AM, Grant emailgr...@gmail.com wrote:

  Sure, but you've noted that rdiff-backup is insecure if the source box
is
  violated. What you need, then, is a layer of versioning not subject to
that
  vulnerability.

 Does it exist?


Quick and dirty:

TARGDIR=/home/versions/$(date +%Y%m%d%H%M%S)
mkdir -p "$TARGDIR"
# hard-link copy of the backup tree; the links survive unlinking of the originals
cp -al "$BACKUPDIR"/. "$TARGDIR"/

Disclaimer: Not tested. Use it at your own risk.

Rgds,


[gentoo-user] Anybody want to beta test Gentoo with mdev instead of udev?

2011-11-14 Thread waltdnes
  After a recent thread, about udev developers wanting /usr on the same
partition as / (or else requiring initramfs), it was pretty obvious
that 90%+ of the users here strongly disliked the idea.  I went around
asking on various lists if it was possible to run Gentoo without udev.
After some research, and various unrelated delays, I've come up with a
working Gentoo without udev.  It turns out that busybox's mdev
implementation is sufficient for my needs.  I do the usual email, web
surfing, including Youtube.  I'm listening to Live365.com as I type this
email, so Flash works just fine.  Contrary to the FUD I've heard, X
works just fine, thank you, without an xorg.conf.  Modern flatscreens
with EDID info are set up automatically.  I suppose that old CRT
monitors without EDID info might require xorg.conf, but that's exotic
hardware nowadays.  The only change I notice is somewhat faster bootup.

  The purpose of this email is to ask adventurous people here to beta
test my approach to a udev-less Gentoo.  If we don't find any
showstopper problems, we can think about requesting Gentoo developers to
support an mdev-based profile.  It would help the cause if a large
number of testers can report that it works for them.  The instructions
for a udev-ectomy follow below.  Thanks to Zac Medico and others on the
Gentoo developers' list for their helpful hints and pointers on how to
do this.  I couldn't have figured this out by myself.

  The usual warnings apply...
* this is a beta
* use a spare test machine
* if you don't follow the instructions correctly, the result might be
  an unbootable linux
* even if you do follow instructions, the result might be an unbootable
  linux


1) Set up your kernel to support and automount a devtmpfs filesystem at
   /dev

* If you prefer to edit .config directly, set
  CONFIG_DEVTMPFS=y and CONFIG_DEVTMPFS_MOUNT=y

* If you prefer "make menuconfig", the route is as shown below.  Note
  that the "Automount devtmpfs..." option won't appear until you enable
  the "Maintain a devtmpfs..." option.

make menuconfig
  Device Drivers  ---
Generic Driver Options  ---
  [*] Maintain a devtmpfs filesystem to mount at /dev
  [*]   Automount devtmpfs at /dev, after the kernel mounted the rootfs

  Once you've made the changes, rebuild the kernel.
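
  To double-check that both options stuck, a quick grep of the kernel
  config (run from the kernel source directory) should show the two
  symbols set to y:

grep DEVTMPFS .config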


2) Set up for emerging busybox; there are 2 items to change

A) It appears that there may be an mdev bug in older versions of
   busybox.  To avoid that bug, keyword busybox-1.19.2 in
   /etc/portage/package.keywords.  E.g. if you're using 32-bit Gentoo on
   Intel, the incantation is...

=sys-apps/busybox-1.19.2 ~x86

   Change the ~x86 to reflect your architecture, etc.

B) busybox requires the mdev flag in this situation.  The static
flag is probably also a good idea.  In file /etc/portage/package.use
add the line

sys-apps/busybox static mdev

   Now, emerge busybox
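
   To confirm the flags were picked up, a pretend run should show both
   of them enabled in busybox's USE column (exact output differs a bit
   between portage versions):

emerge -pv sys-apps/busybox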


3) In the bootloader append line, include init=/sbin/linuxrc where
   the file /sbin/linuxrc consists of *AT LEAST*...

#!/sbin/busybox ash
mount -t proc proc /proc
mount -t sysfs sysfs /sys
exec /sbin/init

   This should be enough for most users.  If you have an unusual setup,
   you may need additional stuff in there.  If you're using lilo remember
   to re-run lilo to implement the changes.
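
   One thing not spelled out above: the script presumably also needs to
   be executable before the kernel can run it as init, e.g.

chmod +x /sbin/linuxrc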

4) Remove udev from the services list, and replace it with mdev.  Type
   the following 2 commands at the command line
rc-update del udev sysinit
rc-update add mdev sysinit
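
   To verify the swap, something like the following should now list mdev
   (and no longer udev) in the sysinit runlevel:

rc-update show sysinit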


5) reboot to your new kernel.  You're now running without using udev.


6) ***THIS STEP IS OPTIONAL***  This is only to allay any suspicion that
   udev is still in use.  udev is pulled in by virtual/dev-manager,
   which in turn is pulled in by the kernel.
* cd /usr/portage/virtual/dev-manager
* Make a backup copy of dev-manager-0.ebuild
* Edit dev-manager-0.ebuild to include sys-apps/busybox as one option
  in RDEPEND, like so...

RDEPEND="|| ( sys-fs/udev
	sys-fs/devfsd
	sys-apps/busybox
	sys-fs/static-dev
	sys-freebsd/freebsd-sbin )"

  I had really wanted to use sys-apps/busybox[mdev], but an EAPI-0
  ebuild can't handle that syntax.

* execute the following 3 commands at the commandline
ebuild dev-manager-0.ebuild digest
emerge -1 dev-manager
emerge --unmerge sys-fs/udev

* In file /etc/portage/package.mask, append the line
sys-fs/udev
  Create the file if it doesn't already exist.  You now have a totally
  udev-free machine.

-- 
Walter Dnes waltd...@waltdnes.org



[gentoo-user] Re: Firefox desktop icon missing on e17

2011-11-14 Thread Mick
On Monday 14 Nov 2011 22:53:50 Mick wrote:
 This may apply to other desktops but the problem I report here happens on
 the e17 rev. 64957
 
 Following the update to firefox-7.0.1 no icon for firefox shows up on the
 iBar. Looking at /home/michael/.local/share/applications/ I see a
 mozilla-firefox-3.6.desktop file - instead of firefox-7.0.1
 
 Shouldn't this file have been updated?  How can I fix it?

OK, the problem was that I had that mozilla-firefox-3.6.desktop in 
/home/michael/.local/share/applications/.  I removed it and the WM picked up 
/usr/share/applications/firefox.desktop.

All works as expected now.
-- 
Regards,
Mick


signature.asc
Description: This is a digitally signed message part.


Re: [gentoo-user] {OT} Are push backups flawed?

2011-11-14 Thread J. Roeleveld
On Tue, November 15, 2011 3:32 am, Grant wrote:
 You identified a flaw in the system as you were using it. You're right,
 those are flaws. However, you can  fix those flaws by applying some
 magic
 as a sysadmin. That's why several posts in the thread have mentioned
 versioning your backups in some fashion. I've mentioned lvm a couple
 times.

 I thought versioning meant that you could roll back to a previous
 version.  rdiff-backup provides that.

It's part of it.

 I think someone else mentioned pulling the backup target's data to
 another
 locale, either via a pull from another server, or via something like a
 traditional incremental tape backup.

 So the systems push to the backup server and a second backup server
 pulls from the first backup server?  Should the second backup server
 use rdiff-backup against the rdiff-backup repository on the first
 backup server?  I think I've read that's not a good idea.

Not sure, I don't use rdiff-backup. Am considering it for the desktops
once the new server is in place.

 What does everybody else do?  I feel like the first person to ever
 attempt secure automated backups.

For more secure backups, you could use backup-utilities that support
incremental backups.
dar springs to mind. So do larger automated systems.

As my servers are all virtual machines running on Xen, I configured pull
style backups.

For the desktops, I am planning the following:
rdiff-backup (or similar) to push backups from the desktops to the
server, adding hardlinks as already suggested for simple versioning.
The backup script on each desktop will do 2 things (a rough sketch
follows below):
1) rdiff-backup
2) instruct the backup-server to create the hardlinks with versioning

Then, at regular intervals, this will be backed up by pull from the
Host-domain on the server.
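
A minimal sketch of what such a desktop-side script could look like --
the hostname and paths are made up, it assumes passwordless SSH to the
backup server, and it is untested:

#!/bin/bash
# 1) push changed files into the rdiff-backup repository on the server
rdiff-backup /home backupserver::/backups/$(hostname)
# 2) ask the server to hard-link the current state into a timestamped
#    directory for simple versioning
ssh backupserver "cp -al /backups/$(hostname) /backups/versions/$(hostname)-$(date +%Y%m%d)"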

That way I don't see any way to lose all my backups: the data will remain
even if the backup directory of a desktop is deleted.

--
Joost





[gentoo-user] swapping processor problem

2011-11-14 Thread Raffaele BELARDI
I have two Gentoo boxes: X has an ASUS M2NPV-VM with an AMD64 3500+ CPU,
Y has an AMD64 X2 5600+ CPU. Since I need more juice on X I thought I
could swap the CPUs.

After updating X's BIOS, the system with the 'new' CPU boots up to the
MythTV screen with no errors, but it does not respond to the USB keyboard
or to remote-control keypresses. More precisely:

- the keyboard is fine at the grub boot menu; I can move up and down or
edit the entries
- the keyboard is no longer responsive once the init scripts start (at
the point where you can press 'I' for interactive service selection)

My first thought was that X (and Y) WERE compiled with the '-march=native'
GCC flag and that maybe the 5600+ does not properly execute code built for
the 3500+. But a quick search on Wikipedia shows that the 5600+ supports a
superset of the 3500+'s instruction set, so I should have problems putting
the 3500+ in the Y box, not vice versa.
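
One way to see what '-march=native' actually resolved to on each box is to
ask gcc itself; the cc1 line lists the concrete -march and feature flags
(a rough check, exact output varies with the gcc version):

gcc -march=native -E -v - </dev/null 2>&1 | grep cc1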

Any suggestions?

raffaele


Re: [gentoo-user] Anybody want to beta test Gentoo with mdev instead of udev?

2011-11-14 Thread Pandu Poluan
On Nov 15, 2011 1:24 PM, waltd...@waltdnes.org wrote:

 [Walter's full udev-ectomy instructions quoted verbatim -- snipped]


Sounds nice!

However, my Gentoo systems are all virtual servers (DomU VMs on XenServer).
So, the hardware devices are static. Will switching over to mdev give any
benefits?

I'm even toying with the idea of having a completely static /dev, but I
still can't find any guides/pointers for it.
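
For what it's worth, a completely static /dev on a VM mostly boils down to
a handful of mknod calls on the root filesystem -- a minimal, untested
sketch (the major/minor numbers are the usual ones; 202 is the common Xen
xvd block major, so verify against your kernel):

mknod -m 600 /dev/console c 5 1
mknod -m 666 /dev/null    c 1 3
mknod -m 666 /dev/zero    c 1 5
mknod -m 666 /dev/ptmx    c 5 2
mknod -m 660 /dev/xvda    b 202 0

There is also sys-fs/static-dev in the tree (it already appears in the
RDEPEND list above), which may be the pointer you're after.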

(Apologies if my email is OOT)

Rgds,


[gentoo-user] multi-threaded mplayer

2011-11-14 Thread Raffaele BELARDI
Do I need to set any particular USE flag to enable multi-threaded
decoding with mplayer, or is it just a matter of passing the appropriate
'threads=' on the command line?
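
For what it's worth, multi-threaded decoding in mplayer is usually driven
from the command line via the libavcodec decoder options rather than a USE
flag -- something along these lines, assuming your ffmpeg/libav build
supports threaded decoding for the codec in question (the filename is just
an example):

mplayer -lavdopts threads=4 video.mkv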

raffaele