Re: [gentoo-user] Users hi!

2012-11-15 Thread BRM
 From: Dale rdalek1...@gmail.com

Joshua Murphy wrote:
On Wed, Nov 14, 2012 at 4:30 PM, Dale rdalek1...@gmail.com wrote:
BRM wrote:
snip spam 
Hey,
Check this out:
List-Unsubscribe: mailto:gentoo-desktop+unsubscr...@lists.gentoo.org 
Bye.   Dale :-)  :-) 
P.S.  I wonder if he will get the hint.  LOL  
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words! 
Probably not, since it looks like a fairly hands-off spam attempt,
  but I have to say, I'm rather amused by the attempt to spoof a
  Microsoft based site (in url and content) while spamming a Linux
  mailing list. It's first-line bait to pull someone into an 'online
  employment' scam, by the looks of it, with the added benefit of ad
  revenue from those who load that page with a standard browser.

Poison [BLX]
Joshua M. Murphy
Well, I did get a reply on another list.  He/she seems to have read
it at least.  Maybe he/she got the idea.  
As if anyone here would follow a link like that anyway.  It's not
like we are a bunch of crazy folks here.  lol  

First, my apologies to this list. I had gotten it from someone else, but bypassed my better judgement, thinking in part that being on Linux with Firefox was protection enough - which, interestingly, it was not.
The page itself is fairly benign, but it gave the spammers a way to attack webmail sites - e.g. Yahoo! - walk through the address book, and send out their own e-mails. My guess is that it relied on a Firefox exploit, one not yet patched at least on Kubuntu (my work laptop, which I do keep up to date).

Ben



Re: [gentoo-user] Lockdown: free/open OS maker pays Microsoft ransom for the right to boot on users' computers

2012-06-04 Thread BRM
 From: Michael Mol mike...@gmail.com

On Sat, Jun 2, 2012 at 10:04 PM, BRM bm_witn...@yahoo.com wrote:
 From: Michael Mol mike...@gmail.com
[snip]
 In theory that's how key signing systems are supposed to work.
 In practice, they rarely implement the blacklists, as they are (i) hard to maintain,
 and (ii) hard to distribute in an effective manner.

Indeed. While Firefox, Chromium, et al check certificate revocation
lists, Microsoft doesn't; they distribute them as part of Windows
Update.


Which can then be intercepted by IT in any IT department that stages Windows 
Update using their own servers.


 Honestly, I don't expect SecureBoot to last very long.
 Either MS and the OEMs will be forced to always allow users to disable it,
 or they'll simply drop it - kind of like they did with the TPM requirements that were
 talked about 10 years back and never came to fruition.

TPM is still around for organizations which can use them. And,
honestly, I've been annoyed that they haven't been widespread, nor
easy to pick up in the aftermarket. (They come with a random number
generator...just about any HRNG is going to be better than none.)


Yes, TPM (originally named Palladium) is still around; however, its use is almost non-existent.
When it was proposed, it was to include SecureBoot, enable secure Internet transactions, etc.
None of that came to fruition. Now, after over a decade of ignoring it, they are trying it one step at a time, starting with SecureBoot.


I see something like SecureBoot as being useful in corporate and
military security contexts. I don't see it lasting in SOHO
environments.


Certain environments, as you say, may find it useful; but those environments already have very stringent controls over their computers - often to the point that people can barely do their jobs.


[snip]
 What kind of signature is the bootloader checking, anyway?
 Regardless of the check, it'll never be sufficient.
Sure; ultimately, all DRM solutions get cracked.


TPM and SecureBoot will, by design, fail.
We'll see if SecureBoot even makes it to market; if it does, expect some class-action lawsuits to follow.

Ben




Re: [gentoo-user] Lockdown: free/open OS maker pays Microsoft ransom for the right to boot on users' computers

2012-06-04 Thread BRM
 From: Michael Mol mike...@gmail.com

On Mon, Jun 4, 2012 at 9:33 AM, BRM bm_witn...@yahoo.com wrote:
 From: Michael Mol mike...@gmail.com

On Sat, Jun 2, 2012 at 10:04 PM, BRM bm_witn...@yahoo.com wrote:
 From: Michael Mol mike...@gmail.com
[snip]
 In theory that's how key signing systems are supposed to work.
 In practice, they rarely implement the blacklists, as they are (i) hard to maintain,
 and (ii) hard to distribute in an effective manner.

Indeed. While Firefox, Chromium, et al check certificate revocation
lists, Microsoft doesn't; they distribute them as part of Windows
Update.

 Which can then be intercepted by IT in any IT department that stages Windows 
 Update using their own servers.

Only if the workstation is so configured. (i.e. it's joined to the
domain, or has otherwise had configuration placed on it.) It's not
just a matter of setting up a caching proxy server and modifying the
files before they're delivered.

And if you think that's a risk, then consider that your local domain
administrator has the ability to push out the organization CA into
your system cert store as a trusted CA, and can then go on to create
global certs your browser won't complain about.

If you don't own the network, don't expect to be able to do things on
it that the network administrator doesn't want you to do. At the same
time, he can't force (much...see DHCP) configuration onto your machine
without your being aware, at least if you're somewhat responsible about
knowing how your machine is configured.


True.

My point was that, since Microsoft is using Windows Update to distribute the CRLs, corporate IT departments could decide not to let that update through.
Of course, it's their risk if they don't allow it through. Further, they can 
push out CRLs even if Microsoft doesn't send them.

But that's not the concern unless you want your device free of the IT 
department, and that's a wholly different issue.
And of course, they can't change the CA on a WinRT device for SecureBoot.

 Honestly, I don't expect SecureBoot to last very long.
 Either MS and the OEMs will be forced to always allow users to disable it,
 or they'll simply drop it - kind of like they did with the TPM requirements that were
 talked about 10 years back and never came to fruition.

TPM is still around for organizations which can use them. And,
honestly, I've been annoyed that they haven't been widespread, nor
easy to pick up in the aftermarket. (They come with a random number
generator...just about any HRNG is going to be better than none.)


 Yes TPM (originally named Palladium) is still around. However its use is 
 almost non-existent.

No, TPM wasn't originally named Palladium. TPM was the keystore
hardware component of a broader system named Palladium. The TPM is
just a keystore and a crypto accelerator, both of which are valuable
to _everybody_. The massive backlash against Palladium is at least
part of why even a generally useful hardware component like the TPM
never got distributed. Imagine if the floating-point coprocessor had
been ditched on x86 because people thought it was a conspiracy to
induce difficult-to-resolve math precision errors from careless use of
floating-point arithmetic.

The part you're worried about is the curtained memory and hardware
lockout, which it sounds like Intel is distributing with vPro.


TPM, SecureBoot, and Palladium are all beasts which need to be removed.


 When it was proposed, it was to include SecureBoot and enable secure 
 Internet transactions, etc.
 None of that came to fruition. Now, after over a decade of ignoring it, they 
 are trying it one step at a time, first with SecureBoot.
I see something like SecureBoot as being useful in corporate and
military security contexts. I don't see it lasting in SOHO
environments.
 Certain environments as you say may find it useful; but then those 
 environments already have very stringent controls
 over the computers in those environments, often to the inability of people 
 to do their job.

The nature of those controls stems at least in part from the ability
to use other means to maintain an overall security policy. With more
tools comes the ability to be more flexible, allowing people to do
more convenient things (such as insert a flash drive or CD into a
computer) at lower risk (it'll be more difficult to accidentally boot
from that flash drive or CD).


How often do people accidentally boot from the wrong device?
It's probably more of an issue for USB devices than floppies or CDs these days, but still.

And why destroy people's ability to boot from USB/CD/floppy?
Let's not forget this makes it harder to put Gentoo (and numerous other distros and OSes) on devices.

The user should own and control the device, not a corporate entity (except 
where said corporate entity purchased the device in the first place).


It's for similar reasons the Linux kernel has support for fine-grained
access controls; you can grant

Re: [gentoo-user] Lockdown: free/open OS maker pays Microsoft ransom for the right to boot on users' computers

2012-06-02 Thread BRM
 From: Michael Mol mike...@gmail.com

 On Sat, Jun 2, 2012 at 8:35 PM, Florian Philipp li...@binarywings.net 
 wrote:
  Am 03.06.2012 01:36, schrieb Michael Mol:
  On Sat, Jun 2, 2012 at 6:50 PM, pk pete...@coolmail.se wrote:
  On 2012-06-02 22:10, Michael Mol wrote:
 
  [snip]
 
  [...]
 
  The BIOS will only load a signed bootloader. The signed bootloader
  will only load a signed kernel. The signed kernel will...do whatever
  you tell it to do.
 
 
  According to Matthew's blog post, Fedora patched Grub2 and the kernel 
 to
  avoid loading custom code into them:
  - Deactivate grub2 plugins
  - Sign all kernel modules and disallow unsigned ones
  - Prevent access to PCI through userland
  - Sanitize the kernel command line
 
 Yeah, I read his blog post via lwn.net. I forgot some of the details.
 
 
 
  What does that mean to a source based distro?
 
   It's going to make building and installing grub and the kernel
   trickier; you'll have to get them signed. And that's going to be a
   PITA for anyone who does development.
 
  What it *really* means is that someone who wants to run Linux as a
  hobbyist or developer is going to disable SecureBoot, and 
 then fall
  back to business as usual.
 
 
  Yeah, the only way for Gentoo to have secure boot is a) let each user
  register with Microsoft, b) provide a binary kernel and boot loader.
 
 If you have a need to get a secure Gentoo boot, and you don't need to
 boot Windows 8, then (as I understand it) you can also purge the UEFI
 BIOS of Microsoft's key and install your own.

well, on x86 for now...
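
(For illustration only: on x86 firmware that allows it, replacing Microsoft's key with your own looks roughly like the sketch below, using the efitools/sbsigntools utilities. Every file name is a placeholder, and the exact steps depend on the firmware exposing a "setup mode".)

# Generate an owner Platform Key and package it as a signed EFI signature list
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=my owner PK/" -keyout PK.key -out PK.crt
cert-to-efi-sig-list -g "$(uuidgen)" PK.crt PK.esl
sign-efi-sig-list -k PK.key -c PK.crt PK PK.esl PK.auth

# With the firmware in setup mode, enroll the new PK (efitools)
efi-updatevar -f PK.auth PK

# Sign your own bootloader/kernel with a key enrolled in db (sbsigntools)
sbsign --key db.key --cert db.crt --output grubx64.efi.signed grubx64.efi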

 
 
  Also, I would assume a legitimate key would be able to
  sign pretty much any binary so a key that Fedora uses could be used 
 to
  sign malware for Windows, which then would be blacklisted by
  Microsoft...
 
  If Fedora allows their key to sign crap, then their key will get 
 revoked.
 
  What I hope (I don't know) is whether or not the signing system
  involved allows chaining.  i.e., with SSL, I can generate my own key,
  get it signed by a CA, and then bundle the CA's public key and my
  public key when I go on to sign _another_ key.
 
  So, could I generate a key, have Fedora sign it, and then use my key
  to sign my binaries? If my key is used to do malicious things,
  Fedora's off the hook, and it's only my key which gets revoked.
 
 
  Consider the exact approach Fedora takes: They've only made a certified
  stage-1 boot loader. This boot loader then loads grub2 (signed with a
  custom Fedora key, nothing chained back to MS) which then loads a
  custom-signed kernel. This allows them to avoid authenticating against
  MS every time they update grub or the kernel.
 
  This means if you want to certify with Fedora, you don't need to chain
  up to MS as long as you use their stage-1 boot loader. However, if I was
  part of Fedora, I wouldn't risk my key by signing other people's 
 stuff.
  Mainboard makers won't look twice when they see rootkits with Fedora
  boot loaders.
 
 Yeah, that's not the kind of thing I was thinking about.
 
 With SSL's PKI, someone like StartSSL has a CA cert.
 
 I generate my own key, have StartSSL sign my key. My brother generates
 a key, and I sign his.
 
 Now my brother takes his key and sends you a signed email.
 
 Now, you've never heard of me, and the crypto signature attached to
 that email doesn't mean anything. However, if he bundles my public key
 along with his public key in that email, then you can see that my
 public key was signed by someone you _do_ know. Now you have a chain
 of signatures showing the relationship between that email and the root
 CA.
 
 Now here's the interesting part, and what I was alluding to wrt signed
 binaries and key revocation.
 
 Let's say _my_ key is leaked. My brother sends you an email signed with
 his key. You look at that key, you see that key hasn't been revoked.
 You look at the key that signed that key, and you see that _that_ key
 _has_ been revoked. You can then choose to not trust keys signed by
 that key.
 
 Now let's say my _brother's_ key is leaked, and so he revokes it. Any
 new emails signed with that key can be seen to be invalid. However,
 _my_ key is still considered valid; I can still sign things with it.
 
 That's the kind of thing I was thinking about. If you allow key chains
 to be deep, rather than forcing them to be wide, you can wield
 blacklists like a scalpel, rather than a bludgeon.

In theory that's how key signing systems are supposed to work.
In practice, they rarely implement the blacklists, as they are (i) hard to maintain,
and (ii) hard to distribute in an effective manner.
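
(To make the chaining concrete, here is a rough openssl sketch of a two-level chain - every file name is a placeholder, and details vary with the openssl version and config:)

# Self-signed root CA
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=Example Root/" -keyout root.key -out root.crt

# Intermediate key, signed by the root and marked as a CA so it can sign further keys
printf 'basicConstraints=critical,CA:TRUE\n' > ca.ext
openssl req -new -newkey rsa:2048 -nodes -subj "/CN=Example Intermediate/" \
    -keyout mid.key -out mid.csr
openssl x509 -req -days 365 -in mid.csr -CA root.crt -CAkey root.key \
    -CAcreateserial -extfile ca.ext -out mid.crt

# Leaf key, signed by the intermediate
openssl req -new -newkey rsa:2048 -nodes -subj "/CN=Example Leaf/" \
    -keyout leaf.key -out leaf.csr
openssl x509 -req -days 365 -in leaf.csr -CA mid.crt -CAkey mid.key \
    -CAcreateserial -out leaf.crt

# Verify the leaf back to the root through the intermediate
openssl verify -CAfile root.crt -untrusted mid.crt leaf.crt

Revoke only leaf.crt and mid.crt can keep signing; revoke mid.crt and everything under it becomes suspect - the scalpel-versus-bludgeon distinction above.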

Honestly, I don't expect SecureBoot to last very long.
Either MS and the OEMs will be forced to always allow users to disable it,
or they'll simply drop it - kind of like they did with the TPM requirements that were
talked about 10 years back and never came to fruition.

  and how is malware defined? Anything that would be
  detrimental to Microsoft?
 
  Dunno. I imagine it 

Re: [gentoo-user] System shuts off on boot-up

2012-01-25 Thread BRM
Well, just to follow up on this - I did get the nouveau driver working and the 
system operational.

- The console still does not work; don't know why but it seems to not get a 
working resolution. Not sure how to fix that right now.
- Starting XDM (KDM) works, and brings up the login.
- I am able to SSH in from another system now.

Now, this was after I paid a little closer attention to some updates. I re-synced Portage and got the 3.1.6 Linux kernel, which I promptly upgraded to.
I also went back and rebuilt the X11 drivers (nouveau, keyboard, mouse, evdev) and paid closer attention to the output of the nouveau driver, which immediately complained about the ACPI_WMI and MXM_WMI interfaces missing from the kernel. Another kernel rebuild after enabling those (they had been enabled when I selected the driver, I think, but I kept disabling the x86 features, which removed them) and it no longer complains.
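
(For anyone hitting the same thing, the quick check I should have run first - kernel source path assumed to be /usr/src/linux:)

grep -E 'CONFIG_ACPI_WMI|CONFIG_MXM_WMI' /usr/src/linux/.config
# both want to be =y or =m before nouveau stops complaining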

So, I seem to have a working X environment, but no working console environment for now.
I'm guessing I probably need to tell the nouveau framebuffer driver what resolution to use to resolve that one.
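
(The thing I plan to try is forcing a mode on the kernel command line; the connector name and mode below are guesses for my hardware:)

# appended to the kernel line in grub.conf
video=DVI-I-1:1280x1024@60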

Thanks all!

Ben





 From: BRM bm_witn...@yahoo.com
To: gentoo-user@lists.gentoo.org gentoo-user@lists.gentoo.org 
Sent: Sunday, January 22, 2012 8:49 PM
Subject: Re: [gentoo-user] System shuts off on boot-up
 
 From: BRM bm_witn...@yahoo.com
 To: gentoo-user@lists.gentoo.org gentoo-user@lists.gentoo.org
 Cc: 
 Sent: Sunday, January 22, 2012 11:50 AM
 Subject: Re: [gentoo-user] System shuts off on boot-up
 
  From: Neil Bothwick n...@digimed.co.uk
 
 On Fri, 20 Jan 2012 19:57:31 -0800 (PST), BRM wrote:
  As the system starts to boot-up, it switches like it is going to start
  X - changing a video mode somehow. I don't have xdm in the 
 runlevels
  yet, so it can't be starting XDM at all.This seems to happen right
  after udevd is started, while it waiting on the udev events. The system
  then just shuts off (power remain on - fans are still on, but monitors
  are off,  and nothing responds, etc.) , and it never completes boot-up.
 
 Do you have another computer you can use to test if it is alive with ping
 or SSH? This is occurring around the point at which KMS kicks in, you may
 be just losing your display but still have an otherwise working system.
 
 
 Yes SSH is enabled; no I can't SSH into it. It seems to be completely dead.
 
 Try adding nomodeset (or intel.modeset=0) to your kernel boot parameters
 to disable KMS.
 
 
 Ok. Setting nomodeset works. However, if I understand the nouveau 
 driver correctly then that won't work for using the nouveau driver as it 
 requires KMS.
 
 Digging a little deeper into the nouveau driver and KMS[1], I discovered 
 that I 
 probably need to have CONFIG_VT_HW_CONSOLE_BINDING set in the kernel config 
 as 
 well - which it wasn't. So that probably explains what was happening as 
 CONFIG_HW_CONSOLE was set, so there may have been two drivers competing for 
 fb0.
 
 Now off to build a new kernel...

Well, that doesn't seem to have been the only problem, at least... still don't know what's causing it.

Ben








Re: [gentoo-user] System shuts off on boot-up

2012-01-25 Thread BRM
- Original Message -

 From: Mick michaelkintz...@gmail.com
 Does it work satisfactorily with kernel 3.0.6?
 
 I found the 3.1.6 breaking suspend on my machine so have gone back to 3.0.6, 
 but my hardware and video driver is different to yours.

I haven't tried 3.0.6 with the ACPI_WMI and MXM_WMI options enabled.
I would imagine it would work, as that was probably the issue - not having all the kernel options set correctly usually does interesting things like that.

Ben




Re: [gentoo-user] System shuts off on boot-up

2012-01-22 Thread BRM
 From: Neil Bothwick n...@digimed.co.uk

On Fri, 20 Jan 2012 19:57:31 -0800 (PST), BRM wrote:
 As the system starts to boot-up, it switches like it is going to start
 X - changing a video mode somehow. I don't have xdm in the runlevels
 yet, so it can't be starting XDM at all.This seems to happen right
 after udevd is started, while it waiting on the udev events. The system
 then just shuts off (power remain on - fans are still on, but monitors
 are off,  and nothing responds, etc.) , and it never completes boot-up.

Do you have another computer you can use to test if it is alive with ping
or SSH? This is occurring around the point at which KMS kicks in, you may
be just losing your display but still have an otherwise working system.


Yes SSH is enabled; no I can't SSH into it. It seems to be completely dead.

Try adding nomodeset (or intel.modeset=0) to your kernel boot parameters
to disable KMS.


Ok. Setting nomodeset works. However, if I understand correctly, that won't do for the nouveau driver, as it requires KMS.

Digging a little deeper into the nouveau driver and KMS[1], I discovered that I probably also need CONFIG_VT_HW_CONSOLE_BINDING set in the kernel config - which it wasn't. That probably explains what was happening: CONFIG_HW_CONSOLE was set, so there may have been two drivers competing for fb0.
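
(For reference, the quick check before rebuilding - kernel source path assumed to be /usr/src/linux:)

grep -E 'CONFIG_VT_HW_CONSOLE_BINDING|CONFIG_FRAMEBUFFER_CONSOLE' /usr/src/linux/.config
# CONFIG_VT_HW_CONSOLE_BINDING should be under Device Drivers -> Character devices in menuconfig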

Now off to build a new kernel...

Thanks,

Ben


[1]http://nouveau.freedesktop.org/wiki/KernelModeSetting




Re: [gentoo-user] System shuts off on boot-up

2012-01-22 Thread BRM
 From: BRM bm_witn...@yahoo.com
 To: gentoo-user@lists.gentoo.org gentoo-user@lists.gentoo.org
 Cc: 
 Sent: Sunday, January 22, 2012 11:50 AM
 Subject: Re: [gentoo-user] System shuts off on boot-up
 
  From: Neil Bothwick n...@digimed.co.uk
 
 On Fri, 20 Jan 2012 19:57:31 -0800 (PST), BRM wrote:
  As the system starts to boot-up, it switches like it is going to start
  X - changing a video mode somehow. I don't have xdm in the 
 runlevels
  yet, so it can't be starting XDM at all.This seems to happen right
  after udevd is started, while it waiting on the udev events. The system
  then just shuts off (power remain on - fans are still on, but monitors
  are off,  and nothing responds, etc.) , and it never completes boot-up.
 
 Do you have another computer you can use to test if it is alive with ping
 or SSH? This is occurring around the point at which KMS kicks in, you may
 be just losing your display but still have an otherwise working system.
 
 
 Yes SSH is enabled; no I can't SSH into it. It seems to be completely dead.
 
 Try adding nomodeset (or intel.modeset=0) to your kernel boot parameters
 to disable KMS.
 
 
 Ok. Setting nomodeset works. However, if I understand the nouveau 
 driver correctly then that won't work for using the nouveau driver as it 
 requires KMS.
 
 Digging a little deeper into the nouveau driver and KMS[1], I discovered that 
 I 
 probably need to have CONFIG_VT_HW_CONSOLE_BINDING set in the kernel config 
 as 
 well - which it wasn't. So that probably explains what was happening as 
 CONFIG_HW_CONSOLE was set, so there may have been two drivers competing for 
 fb0.
 
 Now off to build a new kernel...

Well, that doesn't seem to have been the only problem, at least... still don't know what's causing it.

Ben




[gentoo-user] System shuts off on boot-up

2012-01-20 Thread BRM
I am working on getting my AMD64 system back online. I recently rebuilt it (from scratch) after a very bad case of being out of date and the build issues that resulted (for numerous reasons). However, after I started trying to get X (Xorg) configured with the nouveau driver (I think I ran the proprietary nVidia driver before), it now shuts off during boot-up.

As the system starts to boot up, it switches as if it is going to start X - changing the video mode somehow. I don't have xdm in the runlevels yet, so it can't be starting XDM at all. This seems to happen right after udevd is started, while it is waiting on the udev events. The system then just shuts off (power remains on - the fans are still running, but the monitors are off and nothing responds) and it never completes boot-up.

Note: Xorg won't load yet as I am still figuring out the drivers.

I'm at my wits' end trying to figure out what is wrong with the system.

Ben


Re: [gentoo-user] Gentoo counter?

2011-09-12 Thread BRM
Well...to get back on topic...



- Original Message -
 From: David W Noon dwn...@ntlworld.com
 On Sun, 11 Sep 2011 13:52:53 +0700, Pandu Poluan wrote about
 [gentoo-user] Gentoo counter?:
 
 I've just read about the 'new' Linux Counter from a slashdot 
 article,
 and I wonder: is there a 'Gentoo Counter' that tracks (voluntarily, 
 of
 course) the number of active Gentoo systems in the world?
 
 Why not just look at Linux Counter and see how many run Gentoo?
 
 The Linux Counter collects the distro information, so there is no need
 for a separate counter for each distro.
 

Linux Counter is but one of several projects trying to do that. It is perhaps the best known and most public.
According to Wikipedia there is also another big project, led by Fedora, called Smolt, which is also available in Gentoo.

http://en.wikipedia.org/wiki/Smolt_%28Linux%29

Now, I don't know if the Gentoo folks patched Smolt to report to Gentoo infrastructure instead of Fedora infrastructure (probably a good thing to do).
And I haven't tried it myself (yet) - though I do have Linux Counter listings for most of my systems (not automatically updated).

$0.02

Ben




Re: [gentoo-user] Wireless Configuration...

2011-09-09 Thread BRM
- Original Message -

 From: Mick michaelkintz...@gmail.com
 Subject: Re: [gentoo-user] Wireless Configuration...
 OK, so if you restore the two lines and this error goes away, can you then 
 initialise the device without any other errors?

So far as I am aware.

 Assuming that rfkill shows all is unlocked and the device active, what does 
 iwlist wlan0 scan show now?

The output I quoted was from that configuration.

- Original Message -
 From: Moritz Schlarb m...@moritz-schlarb.de
 Subject: [gentoo-user] Re: Wireless Configuration...
 Am 07.09.2011 16:06, schrieb Michael Mol:
  I believe NetworkManager provides WPA supplicant functionality, so I
  don't think you need wpa_supplicant if you have NetworkManager. 
 It's
  been a *long* time (about five years) since I messed with wireless
  configuration daemons, though. Lots of things can change in that time,
  including memory...

 I don't think so! NetworkManager generates a configuration file on the
 fly for wpa_supplicant, so you still need it, you just don't need to
 configure it anywhere else than NetworkManager!

So NetworkManager/KNetworkManager generates a wpa_supplicant.conf on the fly to 
use, thereby ignoring the one in /etc/wpa_supplicant?
Would it then be correct that it also ignores the settings in /etc/conf.d/net?
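
(I suppose one quick way to see who is driving wpa_supplicant is to look at the flags it was started with - if NetworkManager launched it, it is typically running with -u for D-Bus control and no -c config path:)

ps ax | grep '[w]pa_supplicant'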

Ben




Re: [gentoo-user] Wireless Configuration...

2011-09-08 Thread BRM
- Original Message -

 From: Mick michaelkintz...@gmail.com
 On Thursday 08 Sep 2011 04:52:44 BRM wrote:
  - Original Message -
 
   From: Mick michaelkintz...@gmail.com
 
   Hmm ... what is the error/warning that comes up?
 
  pneumo-martyr wpa_supplicant # /etc/init.d/net.wlan0 start 
   * Bringing up interface wlan0
   *   Starting wpa_supplicant on wlan0 ...
  Line 17: WPA-PSK accepted for key management, but no PSK configured.
  Line 17: failed to parse network block.
 Failed to read or parse configuration '/etc/wpa_supplicant/wpa_supplicant.conf'.
  *   start-stop-daemon: failed to start `/usr/sbin/wpa_supplicant'          [ !! ]
  * ERROR: net.wlan0 failed to start
 
 Ah!  This shows that your /etc/wpa_supplicant/wpa_supplicant.conf has 
 something wrong with it and it can't be parsed.  Please check the file's 
 
 access rights and its contents.  This is what it looks like here:
 
 $ ls -la /etc/wpa_supplicant/wpa_supplicant.conf
 -rw-r--r-- 1 root root 33388 Jun 14 15:02 
 /etc/wpa_supplicant/wpa_supplicant.conf

That error only comes up when those two lines are commented out. If I return 
them, then all is fine.
 
   # iwlist wlan0 scanning
 
  Simply returns:
 
  wlan0            No scan results
 
 Your device has not been initialised, therefore it will not be able to scan
 until it is.

True.

 It also returns 0. I have wlan0 logs directed to /var/log/net/wireless; here's the output from the last attempt:

 Sep  7 23:01:43 pneumo-martyr NetworkManager: <info> (wlan0): driver supports SSID scans (scan_capa 0x01).
 Sep  7 23:01:43 pneumo-martyr NetworkManager: <info> (wlan0): new 802.11 WiFi device (driver: 'b43legacy')
 Sep  7 23:01:43 pneumo-martyr NetworkManager: <info> (wlan0): exported as /org/freedesktop/NetworkManager/Devices/1
 Sep  7 23:01:43 pneumo-martyr NetworkManager: <info> (wlan0): now managed
 Sep  7 23:01:43 pneumo-martyr NetworkManager: <info> (wlan0): device state change: 1 -> 2 (reason 2)
 Sep  7 23:01:43 pneumo-martyr NetworkManager: <info> (wlan0): bringing up device.
 Sep  7 23:01:43 pneumo-martyr kernel: ADDRCONF(NETDEV_UP): wlan0: link is not ready
 Sep  7 23:01:43 pneumo-martyr NetworkManager: <info> (wlan0): preparing device.
 Sep  7 23:01:43 pneumo-martyr NetworkManager: <info> (wlan0): deactivating device (reason: 2).
 Sep  7 23:01:43 pneumo-martyr NetworkManager: <info> (wlan0): supplicant interface state: starting -> ready
 Sep  7 23:01:43 pneumo-martyr NetworkManager: <info> (wlan0): device state change: 2 -> 3 (reason 42)
 
  That's about as far as I have been able to get tonight.
 
 Just in case, can you please check that rfkill lists both soft and hard locks 
 are *not* on? 

I have checked rfkill quite a bit. For a while it was an issue whenever I restarted wlan0 - I'd have to stop wlan0, run rfkill unblock all, then start wlan0 again to get a connection. Very annoying.
Using KNetworkManager I have found it blocked on occasion, but mostly unblocked.
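
(For the record, the dance looked like this:)

/etc/init.d/net.wlan0 stop
rfkill unblock all
/etc/init.d/net.wlan0 start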

 Also, what is your wireless NIC?  It may be worth checking that you are still 
 using the correct driver for your wireless chipset?
 http://linuxwireless.org/en/users/Drivers/b43
 and that you are using the latest firmware?
 http://linuxwireless.org/en/users/Drivers/b43#Device_firmware_installation

Sadly, it's a Dell TrueMobile 1300, which uses the Broadcom 4306 rev. 2 chipset.
There's only one version of the firmware usable with it, and the b43-legacy driver is the only one that supports it.

I am still trying to find a good replacement. Since I want an 802.11n-capable replacement, finding a new mini-PCI card is hard. (Intel only has mini-PCIe.)
Finding a decently supported PCMCIA/PC Card (Type I or II) card is also hard - most that are supported only cover the 2.4 GHz range, and I'd like to use the 5 GHz range for 802.11n with 2.4 GHz for 802.11g.
Simply put, I'd like to take full advantage of 802.11n, and finding something capable and supported is proving difficult. The linuxwireless.org website is not very helpful in that respect either.

So, yes - I'm fully open to replacement suggestions. I'd much rather have a fully supported Atheros-based card, and I'm getting tired of looking too.

Ben




Re: [gentoo-user] Wireless Configuration...

2011-09-07 Thread BRM
- Original Message -

 From: Mick michaelkintz...@gmail.com
 On Tuesday 06 Sep 2011 15:24:33 BRM wrote:
  - Original Message -
   From: Mick michaelkintz...@gmail.com
   On Saturday 03 Sep 2011 15:14:27 BRM wrote:
    - Original Message -
   I think the above should be either:
   
     ctrl_interface=/var/run/wpa_supplicant
     ctrl_interface_group=wheel
   
   or,
   
     DIR=/var/run/wpa_supplicant GROUP=wheel
 
  Ok. Corrected that to the first one.
 
 Fine.  I note that you said the wpa_gui won't scan further down this thread, 
 
 just in case ... is your user part of the wheel group?

Yes, so I can use sudo.
 
    #ctrl_interface_group=wheel
    ap_scan=1
    fast_reauth=1
    # This blank configuration will automatically use DHCP for any 
 net.*
    # scripts in /etc/init.d.  To create a more complete 
 configuration,
    # please review /etc/conf.d/net.example and save your 
 configuration
    # in /etc/conf.d/net (this file :]!).
    
    # Standard Network:
    config_eth0=( dhcp )
   
   The old syntax you use here, which was ( value ) is now 
 deprecated. 
   You
   should replace all such entries by removing the brackets, e.g. the 
 above
   becomes:
   
   config_eth0=dhcp
   
   This is explained in: 
 http://www.gentoo.org/doc/en/openrc-migration.xml
 
  Corrected that one too. eth0 was working fine though.
 
 Yes, because eth0 will default to dhcp, after the old syntax you were using 
 errors out or is ignored.

Ok.
 
   modules=wpa_supplicant
   wpa_supplicant_wlan0=-Dwext
   config_wlan0=dhcp
 
  I re-enabled those and added the last line.
 
 OK, wpa_supplicant should now work as intended.
 
 
   You need to add or uncomment the following to your 
 wpa_supplicant.conf:
   =
   network={
           key_mgmt=NONE
           priority=0
   }
   =
   The above will let latch on the first available AP.
 
  I wasn't sure that that one was for. I've re-enabled it and the 
 original
  one for my network. 
 
 OK, this is useful for open AP which accept connections.  If they need 
 encryption you can add this using the wpa_gui.

Interesting. Good to know. Thanks!
 
   Also, you can then add any AP of preference with passphrases and what
   not: =
   # Home Network
   network={
         ssid=MY-NETWORK
   #      key_mgmt=IEEE8021X  --You don't need these entries 
 here, unless
   #      eap=TLS             --you run SSL certs for authentication
         wep_key0=DEADBEAF0123456789ABCDEF000
         priority=1
         auth_alg=OPEN
   }
   =
 
  Interestingly, wpa_supplicant complains if those two lines are not there
  even though I am not doing SSL auth. 
 
 Hmm ... what is the error/warning that comes up?

I'll have to check after I get home.
 
 Either way, can you please add:
 
 eapol_version=1

Will do this evening.
 
  I'd rather use the NetworkManager in KDE than wpa_gui.
  That said, NetworkManager in KDE seems to be using wicd for some reason.
 You need someone else to chime in here, because I use neither of these.  As 
 far as I read in this M/L wicd is more or less fool-proof.
  I also have KDE running under Kubuntu on my work computer (4.6.2) and the
  Network Manager is completely different (don't know why) - it's not 
 wicd
  as far as I can tell.
 
  However, They are still not working. wpa_gui refuses to scan and find
  networks; while wicd is not finding networks either - but there's so
  little information in the GUI that it is practically useless to say why.
  Perhaps I've got something at the KDE layer screwed up?
 I don't know if one is causing a clash with the other, so don't try to 
 use 
 both at the same time.  If wicd is started automatically when you boot/login, 
 then just use that.

Well, I figured this part out. Essentially, I had wpa_supplicant and wicd installed.
However, what I really wanted was NetworkManager and KNetworkManager.
So I removed wicd, and installed NetworkManager and KNetworkManager.
I now get the interface I expected under KDE and don't need to use wpa_gui any more.
Still, it doesn't scan.
 
 When wpa_gui refuses to scan what message do you get?  What do the logs say.
 Also, if wpa_gui or wicd fail to scan for APs what do you get from:
 # iwlist wlan0 scanning

At least from the applications I am not getting any error messages. I'll have 
to check the logs tonight and let you know.

This morning I checked the antennae to verify they were properly connected to the mini-PCI card (as I had opened it up a few weeks ago to see whether it was mini-PCI or mini-PCIe, but I didn't remove or disconnect anything at that time). Everything checked out. So it shouldn't be a hardware issue unless the card is completely fried for some reason.

I'll check the logs this evening and let you know.

Thanks!

Ben




Re: [gentoo-user] Wireless Configuration...

2011-09-07 Thread BRM
- Original Message -

 From: Mick michaelkintz...@gmail.com
 To: gentoo-user@lists.gentoo.org
 Cc: 
 Sent: Tuesday, September 6, 2011 5:32 PM
 Subject: Re: [gentoo-user] Wireless Configuration...
 
 On Tuesday 06 Sep 2011 15:24:33 BRM wrote:
  - Original Message -
 
   From: Mick michaelkintz...@gmail.com
   
   On Saturday 03 Sep 2011 15:14:27 BRM wrote:
    - Original Message -
 
   I think the above should be either:
   
     ctrl_interface=/var/run/wpa_supplicant
     ctrl_interface_group=wheel
   
   or,
   
     DIR=/var/run/wpa_supplicant GROUP=wheel
 
  Ok. Corrected that to the first one.
 
 Fine.  I note that you said the wpa_gui won't scan further down this thread, 
 
 just in case ... is your user part of the wheel group?
 
    #ctrl_interface_group=wheel
    ap_scan=1
    fast_reauth=1
    # This blank configuration will automatically use DHCP for any 
 net.*
    # scripts in /etc/init.d.  To create a more complete 
 configuration,
    # please review /etc/conf.d/net.example and save your 
 configuration
    # in /etc/conf.d/net (this file :]!).
    
    # Standard Network:
    config_eth0=( dhcp )
   
   The old syntax you use here, which was ( value ) is now 
 deprecated. 
   You
   should replace all such entries by removing the brackets, e.g. the 
 above
   becomes:
   
   config_eth0=dhcp
   
   This is explained in: 
 http://www.gentoo.org/doc/en/openrc-migration.xml
 
  Corrected that one too. eth0 was working fine though.
 
 Yes, because eth0 will default to dhcp, after the old syntax you were using 
 errors out or is ignored.
 
 
   modules=wpa_supplicant
   wpa_supplicant_wlan0=-Dwext
   config_wlan0=dhcp
 
  I re-enabled those and added the last line.
 
 OK, wpa_supplicant should now work as intended.
 
 
   You need to add or uncomment the following to your 
 wpa_supplicant.conf:
   =
   network={
           key_mgmt=NONE
           priority=0
   }
   =
   The above will let latch on the first available AP.
 
  I wasn't sure that that one was for. I've re-enabled it and the 
 original
  one for my network. 
 
 OK, this is useful for open AP which accept connections.  If they need 
 encryption you can add this using the wpa_gui.
 
 
   Also, you can then add any AP of preference with passphrases and what
   not: =
   # Home Network
   network={
         ssid=MY-NETWORK
   #      key_mgmt=IEEE8021X  --You don't need these entries 
 here, unless
   #      eap=TLS             --you run SSL certs for authentication
         wep_key0=DEADBEAF0123456789ABCDEF000
         priority=1
         auth_alg=OPEN
   }
   =
 
  Interestingly, wpa_supplicant complains if those two lines are not there
  even though I am not doing SSL auth. 
 
 Hmm ... what is the error/warning that comes up?

pneumo-martyr wpa_supplicant # /etc/init.d/net.wlan0 start  
 * Bringing up interface wlan0
 *   Starting wpa_supplicant on wlan0 ...
Line 17: WPA-PSK accepted for key management, but no PSK configured.
Line 17: failed to parse network block.
Failed to read or parse configuration '/etc/wpa_supplicant/wpa_supplicant.conf'.
 *   start-stop-daemon: failed to start `/usr/sbin/wpa_supplicant'          [ !! ]
 * ERROR: net.wlan0 failed to start
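
(For what it's worth, the block wpa_supplicant trips over just needs a psk entry; a minimal sketch - the SSID and passphrase are placeholders:)

network={
        ssid="MY-NETWORK"
        key_mgmt=WPA-PSK
        psk="my-passphrase"
        priority=1
}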


 Either way, can you please add:
 
 eapol_version=1

Done.

   and something like this for WPA2:
   =
   network={
           ssid=what-ever
           proto=RSN
           key_mgmt=WPA-PSK
           pairwise=CCMP
           auth_alg=OPEN
           group=CCMP
          psk="pass_123456789"
          priority=5
  }
  =
 
  I want to try to get away from adding things directly to the
  wpa_supplicant.conf file as I would rather that the connection information
  be managed by a GUI tool. 
 
 You should be able to add such details in the GUI of choice.  Adding them in 
 wpa_supplicant.conf means that they should appear already filled in the GUI.
 
 
  I'd rather use the NetworkManager in KDE than wpa_gui.
 
  That said, NetworkManager in KDE seems to be using wicd for some reason.
 
 You need someone else to chime in here, because I use neither of these.  As 
 far as I read in this M/L wicd is more or less fool-proof.
 
  I also have KDE running under Kubuntu on my work computer (4.6.2) and the
  Network Manager is completely different (don't know why) - it's not 
 wicd
  as far as I can tell.
 
 However, they are still not working. wpa_gui refuses to scan and find
  networks; while wicd is not finding networks either - but there's so
  little information in the GUI that it is practically useless to say why.
  Perhaps I've got something at the KDE layer screwed up?
 
 I don't know if one is causing a clash with the other, so don't try to 
 use 
 both at the same time.  If wicd is started automatically when you boot/login, 
 then just use that.
 
 When wpa_gui refuses to scan

Re: [gentoo-user] Wireless Configuration...

2011-09-06 Thread BRM
- Original Message -

 From: Mick michaelkintz...@gmail.com
 On Saturday 03 Sep 2011 15:14:27 BRM wrote:
  - Original Message -
   Assuming that you have built in your kernel or loaded the driver 
 module
   for your NIC and any firmware blobs have also been loaded, please 
 show:
 
  Yes. As I noted, it's worked before. The driver loads it find the 
 firmware,
  etc. Configuration information is below.
   
 
   /etc/conf.d/net
 
  # This is a network block that connects to any unsecured access point.
  # We give it a low priority so any defined blocks are preferred.
  ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
 
 I think the above should be either:
 
   ctrl_interface=/var/run/wpa_supplicant
   ctrl_interface_group=wheel
 
 or, 
 
   DIR=/var/run/wpa_supplicant GROUP=wheel

Ok. Corrected that to the first one.
 
  #ctrl_interface_group=wheel
  ap_scan=1
  fast_reauth=1
  # This blank configuration will automatically use DHCP for any net.*
  # scripts in /etc/init.d.  To create a more complete configuration,
  # please review /etc/conf.d/net.example and save your configuration
  # in /etc/conf.d/net (this file :]!).
 
  # Standard Network:
  config_eth0=( dhcp )

 The old syntax you use here, which was ( value ) is now deprecated.  
 You 
 should replace all such entries by removing the brackets, e.g. the above 
 becomes:
 
 config_eth0=dhcp
 
 This is explained in: http://www.gentoo.org/doc/en/openrc-migration.xml

Corrected that one too. eth0 was working fine though.
 
  dns_domain_lo=coal
  # Wireless Network:
  # TBD
  #config_wlan0 ( wpa_supplicant )
  #
 
  # Enable this to use WPA supplicant; however, need to change the
  configuration of the Wireless first. modules=( !plug 
 !iwconfig
  wpa_supplicant )
  #modules=( !plug wpa_supplicant )
  #modules=(iwconfig)
  #wpa_supplicant_wlan0=-Dwext
  #wpa_timeout_wlan0=15
 
  #modules=(iwconfig)
  #iwconfig_wlan0=mode managed
  #wpa_timeout_wlan0=15
 
 You should also add something like:
 
 modules=wpa_supplicant
 wpa_supplicant_wlan0=-Dwext
 config_wlan0=dhcp

I re-enabled those and added the last line.
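
(So the wireless-related part of /etc/conf.d/net now reads roughly like this - trimmed, and the quotes are part of the new syntax:)

config_eth0="dhcp"
modules="wpa_supplicant"
wpa_supplicant_wlan0="-Dwext"
config_wlan0="dhcp"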
 
 
   and 
   
   grep ^[^#] /etc/wpa_supplicant/wpa_supplicant.conf
 
  ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
  ap_scan=1
  fast_reauth=1
  country=US
 
  # Home Network
  #network={
  #       ssid=MY-NETWORK
  #       key_mgmt=IEEE8021X
  #       eap=TLS
  #       wep_key0=DEADBEAF0123456789ABCDEF000
  #       priority=1
  #       auth_alg=SHARED
  #}
  #
  #network={
  #       key_mgmt=NONE
  #       priority=-999
  #}
 
  The network information is commented out as I was trying to get it to work
  with the normal user-space tools (e.g. Network Manager); however, it is no
  longer working in that configuration either. It doesn't seem to ever 
 get
  to doing the SCAN portion of trying to find networks.
 
  I can see wlan0 in wpa_gui, but I can't get it to scan at all. And 
 I'd much
  rather use Network Manager if I could over wpa_gui; but it doesn't even
  see wlan0 (it happily finds eth0, my wired NIC.)
 
  Ben
 
 You need to add or uncomment the following to your wpa_supplicant.conf:
 =
 network={
         key_mgmt=NONE
         priority=0
 }
 =
  The above will let it latch on to the first available AP.

I wasn't sure what that one was for. I've re-enabled it and the original one for my network.
 
 Also, you can then add any AP of preference with passphrases and what not:
 =
 # Home Network
 network={
       ssid=MY-NETWORK
 #      key_mgmt=IEEE8021X  --You don't need these entries here, unless
 #      eap=TLS             --you run SSL certs for authentication
       wep_key0=DEADBEAF0123456789ABCDEF000
       priority=1
       auth_alg=OPEN
 }
 =

Interestingly, wpa_supplicant complains if those two lines are not there even 
though I am not doing SSL auth.
 
 and something like this for WPA2:
 =
 network={
         ssid=what-ever
         proto=RSN
         key_mgmt=WPA-PSK
         pairwise=CCMP
         auth_alg=OPEN
         group=CCMP
          psk="pass_123456789"
          priority=5
  }
  =

I want to try to get away from adding things directly to the 
wpa_supplicant.conf file as I would rather that the connection information be 
managed by a GUI tool.
 
 Something like the above should get you online again, but you may need to 
 experiment with different settings depending on the encryption used by the 
 chosen AP.
 
 When wardriving open the wpa_gui, scan and double-click on your desired AP.  
 Then enter the key for it (if it has one) and you should be able to 
 associate.  
 At that point dhcpcd will kick in and you'll get an IP address and be able 
 to 
 connect to the Internet (as long as the AP is not asking for DNS 
 authentication or some such security measure).
 
 Of course if you use networkmanager you do not need to use wpa_gui.

I'd rather use the NetworkManager in KDE than

Re: [gentoo-user] Wireless Configuration...

2011-09-03 Thread BRM
- Original Message -

 From: Mick michaelkintz...@gmail.com
 To: gentoo-user@lists.gentoo.org
 Cc: 
 Sent: Friday, September 2, 2011 11:29 AM
 Subject: Re: [gentoo-user] Wireless Configuration...
 
 On Friday 02 Sep 2011 14:38:56 BRM wrote:
  - Original Message -
 
   From: Canek Peláez Valdés can...@gmail.com
   
   On Thu, Sep 1, 2011 at 11:52 PM, BRM bm_witn...@yahoo.com 
 wrote:
    I still haven't decided what to get for my system to replace 
 the NIC
   
   with, but the card I have should be working with my existing 802.11g
   network already; however, it doesn't - I have had to connect my 
 laptop
   via Ethernet cable to my wireless bridge to get network access.
   
    /etc/init.d/net.wlan0 starts, but goes immediately inactive. From 
 what
    I
   
   can find on-line, this seems to have been something common after 
 moving
   to Base Layout 2/OpenRC; however, I couldn't find anything that
   specified what the actual solution was - I think most ended up doing a
   complete reinstall of their wicd/wpa-supplicant software - either way
   details were lacking.  I've successfully had wpa-supplicant 
 working in
   the past, and as a result of all of this I've tried to get it up 
 through
   the other method too (iwconfig?), but no success. (I think I have
   managed to get it to scan some, but not sufficiently and certainly no
   connections.)
   
   Did you followed the instructions at
   
   http://www.gentoo.org/doc/en/openrc-migration.xml
   
   specifically the network section?
 
  Yes, I believe so. It's been a while since I made the migration, but 
 the
  wireless configuration seems to have broken about the same time.
 
  The wired configuration works just fine, and the guide mentions nothing
  about Wireless changes - e.g. WPA Supplicant - and that's where the
  problem is. 
 
    Anyone see this issue and know what the solution is? I'd like 
 to at
   
   least get my 802.11g access back - the current setup is a bit of a 
 pain
   and very limiting.
   
   Since you use a laptop, I will assume you have either KDE, GNOME or
   Xfce. If that's the case, why don't you try NetworkManager or 
 connman,
   and use the GUI thingy to do the work for you? I haven't manually
   configured a wireless network in years, and I have been the last three
   months traveling with my laptop literally all over the world,
   connecting to all kinds of access points.
   NetworkMnager just works, but I also hear great comments about 
 connman.
 
  I'm using KDE, yes. I've tried the tools but it doesn't seem to 
 ever scan
  for a wireless network on its own, and the scans I have been able to force
  don't result in a connection - they don't even find the network 
 I'm trying
  to attach it to.  Prior to the change, I could get WPA Supplicant to
  connect to my wireless, though I did have to have it specifically
  configured to do so. It wouldn't typically work using the tools for the
  one wireless network, while I could get it to for others (hotels, other
  places, etc.).
 
  I have added another network that is configured a little differently that I
  would prefer to connect to (over the old one), but at the moment I'll 
 take
  either. (The new 802.11g network uses WPA2; the old one uses WEP+Shared.)
 
 Assuming that you have built in your kernel or loaded the driver module for 
 your NIC and any firmware blobs have also been loaded, please show:

Yes. As I noted, it's worked before. The driver loads, it finds the firmware, etc.
Configuration information is below.
 
 /etc/conf.d/net 

# This is a network block that connects to any unsecured access point.
# We give it a low priority so any defined blocks are preferred.
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
#ctrl_interface_group=wheel
ap_scan=1
fast_reauth=1
# This blank configuration will automatically use DHCP for any net.*
# scripts in /etc/init.d.  To create a more complete configuration,
# please review /etc/conf.d/net.example and save your configuration
# in /etc/conf.d/net (this file :]!).

# Standard Network:
config_eth0=( dhcp )

dns_domain_lo=coal
# Wireless Network:
# TBD
#config_wlan0 ( wpa_supplicant )
#

# Enable this to use WPA supplicant; however, need to change the configuration 
of the Wireless first.
modules=( !plug !iwconfig wpa_supplicant )
#modules=( !plug wpa_supplicant )
#modules=(iwconfig)
#wpa_supplicant_wlan0=-Dwext
#wpa_timeout_wlan0=15

#modules=(iwconfig)
#iwconfig_wlan0=mode managed
#wpa_timeout_wlan0=15
 
 and  
 
 grep ^[^#] /etc/wpa_supplicant/wpa_supplicant.conf

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
ap_scan=1
fast_reauth=1
country=US

# Home Network
#network={
#   ssid=MY-NETWORK
#   key_mgmt=IEEE8021X
#   eap=TLS
#   wep_key0=DEADBEAF0123456789ABCDEF000
#   priority=1
#   auth_alg=SHARED
#}
#
#network={
#   key_mgmt=NONE
#   priority=-999
#}

The network information is commented out as I was trying to get it to work with 
the normal user

Re: [gentoo-user] Wireless Configuration...

2011-09-02 Thread BRM
- Original Message -

 From: Canek Peláez Valdés can...@gmail.com
 On Thu, Sep 1, 2011 at 11:52 PM, BRM bm_witn...@yahoo.com wrote:
  I still haven't decided what to get for my system to replace the NIC 
 with, but the card I have should be working with my existing 802.11g network 
 already; however, it doesn't - I have had to connect my laptop via Ethernet 
 cable to my wireless bridge to get network access.
 
  /etc/init.d/net.wlan0 starts, but goes immediately inactive. From what I 
 can find on-line, this seems to have been something common after moving to 
 Base 
 Layout 2/OpenRC; however, I couldn't find anything that specified what the 
 actual solution was - I think most ended up doing a complete reinstall of 
 their 
 wicd/wpa-supplicant software - either way details were lacking.  I've 
 successfully had wpa-supplicant working in the past, and as a result of all 
 of 
 this I've tried to get it up through the other method too (iwconfig?), but 
 no success. (I think I have managed to get it to scan some, but not 
 sufficiently 
 and certainly no connections.)
 
  Did you follow the instructions at
 
 http://www.gentoo.org/doc/en/openrc-migration.xml
 
 specifically the network section?

Yes, I believe so. It's been a while since I made the migration, but the 
wireless configuration seems to have broken about the same time.

The wired configuration works just fine, and the guide mentions nothing about 
Wireless changes - e.g. WPA Supplicant - and that's where the problem is.
 
  Anyone see this issue and know what the solution is? I'd like to at 
 least get my 802.11g access back - the current setup is a bit of a pain and 
 very 
 limiting.
 
 Since you use a laptop, I will assume you have either KDE, GNOME or
 Xfce. If that's the case, why don't you try NetworkManager or connman,
 and use the GUI thingy to do the work for you? I haven't manually
 configured a wireless network in years, and I have been the last three
 months traveling with my laptop literally all over the world,
 connecting to all kinds of access points.
  NetworkManager just works, but I also hear great comments about connman.

I'm using KDE, yes. I've tried the tools, but they never seem to scan for a wireless network on their own, and the scans I have been able to force don't result in a connection - they don't even find the network I'm trying to attach to. Prior to the change, I could get WPA Supplicant to connect to my wireless, though I did have to configure it specifically to do so. It wouldn't typically work using the tools for the one wireless network, while I could get it to work for others (hotels, other places, etc.).

I have added another network that is configured a little differently that I 
would prefer to connect to (over the old one), but at the moment I'll take 
either. (The new 802.11g network uses WPA2; the old one uses WEP+Shared.)

Ben




[gentoo-user] Wireless Configuration...

2011-09-01 Thread BRM
I still haven't decided what to get for my system to replace the NIC with, but 
the card I have should be working with my existing 802.11g network already; 
however, it doesn't - I have had to connect my laptop via Ethernet cable to my 
wireless bridge to get network access.

/etc/init.d/net.wlan0 starts, but goes immediately inactive. From what I can 
find on-line, this seems to have been something common after moving to Base 
Layout 2/OpenRC; however, I couldn't find anything that specified what the 
actual solution was - I think most ended up doing a complete reinstall of their 
wicd/wpa-supplicant software - either way details were lacking.  I've 
successfully had wpa-supplicant working in the past, and as a result of all of 
this I've tried to get it up through the other method too (iwconfig?), but no 
success. (I think I have managed to get it to scan some, but not sufficiently 
and certainly no connections.)


Anyone see this issue and know what the solution is? I'd like to at least get 
my 802.11g access back - the current setup is a bit of a pain and very limiting.

Thanks!

Ben




Re: [gentoo-user] Re: Openoffice being replaced?

2011-08-01 Thread BRM
- Original Message -

 From: Grant Edwards grant.b.edwa...@gmail.com
 Subject: [gentoo-user] Re: Openoffice being replaced?
 On 2011-07-29, BRM bm_witn...@yahoo.com wrote:
 From: Paul Hartman paul.hartman+gen...@gmail.com
 To: gentoo-user@lists.gentoo.org
 Sent: Thursday, July 28, 2011 10:41 AM
 Subject: Re: [gentoo-user] Openoffice being replaced?
 
 On Thu, Jul 28, 2011 at 9:13 AM, Dale rdalek1...@gmail.com 
 wrote:
  I noticed this today:
 
  The following mask changes are necessary to proceed:
  #required by @selected, required by @world (argument)
  # /usr/portage/profiles/package.mask:
  # Tomáš Chvátal scarab...@gentoo.org (27 Jul 2011)
  # Old replaced packages. Will be removed in 30 days.
  # app-office/openoffice - app-office/libreoffice
  # app-office/openoffice-bin - app-office/libreoffice-bin
  # app-text/wpd2sxw - app-text/wpd2odt
 =app-office/openoffice-3.2.1-r1
 
 
  Does this mean that libreoffice is going to replace OOo in the 
 tree?
 
 Looks like it. It has already replaced it on all my computers.
 
 Gentoo's OpenOffice has included the go-oo patches for a long time
 anyway, which were the big thing changed about LibreOffice
 
 [...]
 
 I would say switch to LibreOffice and don't look back. :)
 
  I wouldn't. While LibreOffice may have some advances at the moment,
  I'm still interested in following main-line OOo - now sitting
  under Apache.
 
 So you don't use the gentoo OOo ebuilds?  AFAICT, they're a lot closer
 to being libreoffice than to being mainline OOo.
 
  Please do not force us to convert from OO to LO.
 
 If you use the gentoo ebuilds, then you mostly already have.
 Gentoo OOo  = OOo + Go-Oo
 LibreOffice = OOo + Go-Oo

There's other stuff in LibreOffice too. But I'd still much rather be using OOo 
than LibreOffice.

And I'd rather drop the GO-OOo patches, but I don't think there's an option for 
that in emerge.
 
  I have no problem with separate installs for each, but there will be
  those (like me) that want the official OO installs.
 
 But, what you get using the Gentoo ebuilds isn't the official OOo
 install.  If you're running official OOo, then youre not using the
 Gentoo ebuilds, so why do you care what those ebuilds produce?
 
 One of the things I like about LibreOffice is the reduced
 dependancies.  Even with the gnome USE flag turned off, OOo pulls in
 some big gnome dependancies that I don't want.  WTF does an office
 suite need libgweather?

And I'm sure the Apache OO guys will fix that in due time as well.

All I'm saying is that I want to stick with Apache OOo in the long run, not LibreOffice.
Users can switch to the LibreOffice install if they desire, but there's no reason to force those who want to continue with OOo to move over.

Ben



Re: [gentoo-user] Openoffice being replaced?

2011-07-29 Thread BRM
From: Paul Hartman paul.hartman+gen...@gmail.com
To: gentoo-user@lists.gentoo.org
Sent: Thursday, July 28, 2011 10:41 AM
Subject: Re: [gentoo-user] Openoffice being replaced?

On Thu, Jul 28, 2011 at 9:13 AM, Dale rdalek1...@gmail.com wrote:
 I noticed this today:

 The following mask changes are necessary to proceed:
 #required by @selected, required by @world (argument)
 # /usr/portage/profiles/package.mask:
 # Tomáš Chvátal scarab...@gentoo.org (27 Jul 2011)
 # Old replaced packages. Will be removed in 30 days.
 # app-office/openoffice - app-office/libreoffice
 # app-office/openoffice-bin - app-office/libreoffice-bin
 # app-text/wpd2sxw - app-text/wpd2odt
=app-office/openoffice-3.2.1-r1


 Does this mean that libreoffice is going to replace OOo in the tree?

Looks like it. It has already replaced it on all my computers.

Gentoo's OpenOffice has included the go-oo patches for a long time
anyway, which were the big thing changed about LibreOffice (those
patches included in mainline), and using the two I can honestly say
there's really no difference as far as I can tell, aside from the
splash screen. Somebody posted about some Sun templates a while
back... maybe something proprietary like that is changed, but
OpenTemplate.org is meant to replace those anyway.

I would say switch to LibreOffice and don't look back. :)



I wouldn't. While LibreOffice may have some advances at the moment, I'm still 
interested in following main-line OOo - now being set up under Apache.

Please do not force us to convert from OO to LO. I have no problem with 
separate installs for each, but there will be those (like me) that want the 
official OO installs.

That said, I have more confidence in Apache managing OO than I do TDF with LO, 
having observed TDF's mailing lists for several months (before finally dropping 
off in favor of Apache OO). I know others will have different opinions; but 
that (again) is why we should allow those using OO to remain using OO.


Ben




Re: [gentoo-user] Wireless N PCMCIA/CardBus Recommendations...

2011-07-18 Thread BRM
- Original Message 

 From: Paul Hartman paul.hartman+gen...@gmail.com
 To: gentoo-user@lists.gentoo.org
 Sent: Fri, July 15, 2011 5:24:48 PM
 Subject: Re: [gentoo-user] Wireless N PCMCIA/CardBus Recommendations...
 
 On Fri, Jul 15, 2011 at 2:54 PM,  ny6...@gmail.com wrote:
  I have  always had good luck with Atheros-based cards. HTH.
 
 Me too. Plus, they  are usually more likely to be able to do the fun
 stuff like master mode,  monitor mode, packet injection...

Any specific PCMCIA or mini-PCI (not mini-PCIe) cards you all would recommend 
then - either Atheros (preferred) or Intel?

I have only been able to find a couple - namely a few by HQRP, Everex, and 
TP-Link.
I haven't been able to find much info on HQRP, and their cards seem to be 
2.4GHz 
only - without proper 802.11n support.
Same for Everex and most others random ones.
TP-Link seems to support everything, but I'm not sure - Amazon reviews seem 
good (for the most part), but I have had trouble getting to their website for 
whatever reason - perhaps the Great Firewall of China is at play.

At least the Intel ones I come across on Amazon seem either not to support 
Wireless-N or to be mini-PCIe.

TIA,

Ben




[gentoo-user] Wireless N PCMCIA/CardBus Recommendations...

2011-07-14 Thread BRM
After several years, I am now getting around to upgrading my wireless router - 
from a Linksys WRT54G to a Cisco Linksys E4200.
While I am at it, I am also considering getting a new wireless card for my D600 
laptop to at least augment the internal b43-legacy supported Broadcom 43xx card 
that generally works, but is also a pain to keep working.

While it's easy to find a USB wireless card, I'm not really interested in one - 
the form factor is generally prone to breaking, and my D600 laptop only has two 
USB ports (its main flaw), one of which I use for a USB mouse when it's not in 
the docking station. When it is docked, I can't use either port, as they are both 
in the back and blocked by the docking station - so a USB wireless adapter is 
problematic, since I would have to remove it to dock the laptop (undesirable 
to say the least).

So that leaves me with using one of the open PCMCIA card slots. I have two 
wired 
PCMCIA adapters, useful mostly for multi-network and diagnostics; so the slots 
are open.

I'd like to keep the cost down - $50 USD or less; and am pretty open to 
different brands. However, I've found the lookups - at least linuxwireless.org 
- 
to be a little troublesome in identifying the actual cards, so I'm looking for 
some good recommendations.

Thus far I've looked at:

Cisco-Linksys WPC600N
Cisco-Linksys WEC600N
Cisco-Linksys WPC300N

But I haven't been able to determine if they are supported under Linux.
Open to other suggestions too - so long as PCMCIA compatible.

Thanks,

Ben




Re: [gentoo-user] Wireless N PCMCIA/CardBus Recommendations...

2011-07-14 Thread BRM
- Original Message 

 From: Neil Bothwick n...@digimed.co.uk
 On Thu, 14 Jul 2011 09:42:49 -0700 (PDT), BRM wrote:
 
  While I am at  it, I am also considering getting a new wireless card for
  my D600 laptop  to at least augment the internal b43-legacy supported
  Broadcom 43xx card  that generally works, but is also a pain to keep
   working.
 
 [snip]
 
  So that leaves me with using one of the open  PCMCIA card slots. I have
  two wired PCMCIA adapters, useful mostly for  multi-network and
  diagnostics; so the slots are open.
 
 What format  is the internal card? If it's mini-PCI, a standard Intel card
 may be a better  choice.

Yes, I believe it's mini-PCI - two slots; only one used that I'm aware of.

Ok, for 802.11a/b/g; not sure how well it would be for 802.11n.
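
For what it's worth, this is roughly how I'd double-check the internal card's
chipset and which driver is bound to it (the bus address below is just an
example; yours will differ):

# List network-class PCI devices with their vendor/device IDs;
# a mini-PCI wireless card shows up here like any other PCI device.
lspci -nn | grep -iE 'network|wireless'

# Show which kernel driver is attached to that slot (address from lspci).
lspci -k -s 02:03.0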

Ben




Re: [gentoo-user] Goodbye, Gentoo

2011-05-27 Thread BRM
From: Mark Shields laebsh...@gmail.com

To: gentoo-user@lists.gentoo.org
Sent: Thu, May 26, 2011 9:57:26 PM
Subject: Re: [gentoo-user] Goodbye, Gentoo


On Thu, May 26, 2011 at 6:28 PM, Kevin O'Gorman kogor...@gmail.com wrote:

It looks like it's time to take Gentoo off of my main machine.  I feel a 
little 

sad about it, or I'd just quietly go away.

A few months ago, an update made the machine headless -- well, it could no 
longer bring up X but I could use the console-mode for admin, and log in via 
SSH 

from my laptop and run GUI programs.  I was busy at the time, first deciding 
and 

then implementing my retirement, so I let it go.

Now, a couple of months into my retirement, I'm trying to fix things up, and 
the 

latest Gentoo live disk cannot talk to my monitor at all.  Whatever it's 
trying 

is unacceptable to the HD monitor I've had on there for a year, and I can't 
even 

run the consoles.  The video card is an ATI Rage XL on the motherboard.  Like 
the rest of the machine, it's vintage 2000, so maybe support got dropped.  
But 

I'm not inclined to drop the machine -- it was the ballyhooed thing in Linux 
Journal in 2002 when I finished my PHD, so I put together these pieces: 

* Two XEON chips.  I didn't know it right away but that means 4 cores.  They 
are 

old Pentium IV-based 32-bit chips.  I got the slowest still being made, so 
the 

clock speed is 1.6 GHz.  On 4 cores, it's not bad at all. 

*  2GB of DDR ECC memory
* about a dozen hard drives (some old, but mostly 500GB - 2TB Sata drives), I 
feel it's still worthy of respect.  Some of these are in EZ-Dock docking 
stations and are used for rotating backups (including off-site).  The main 
directories are on hardware RAID 1 so I have ongoing redundancy.
* a Smart UPS 1500 for everything except the laser printer.

So, since I am familiar with Ubuntu from work, and have it on a couple of 
laptops, I'm installing from the Ubuntu 11.04 live disk (video is just fine).

The real headache is all the stuff I'm going to have to port.

1) Apache and dynamic (Python CGI) web site.
2) Postfix
3) About a dozen accounts that just do wget(1) data gathering triggered by 
the 

cron daemon.
4) DNS (I run my own domain on a commercial DSL account)
5) NTP client and server
6) Whatever else I forgot I set up over the years.

My original reason for using Gentoo is that this machine was pretty exotic 
when 

I bought it, and I wanted to be able to tweak the compiler to get the most 
out 

of it.  I can still do that for specific applications I'm working on, but 
otherwise it's really a non-issue now.  I have gotten pretty tired of updates 
that take over 48 hours to compile, and the occasional mess-up that once or 
twice led me to rebuild with empty-tree and took a week or so.  


So I guess I shouldn't complain (and I'm not).  I'm just not in the target 
market for Gentoo any more.  It was fun, though.
-- 
Kevin O'Gorman, PhD





You let a small problem like the latest live cd not booting your system scare 
you away?

Have you tried using an older live cd?  If it's a video issue, maybe detecting 
your monitor wrong, how about turning on the framebuffer (there's an option 
for 

that)?
It's doable man, don't give up.

Probably needs to switch to the open source radeon driver instead of the ATI 
binary driver if he hasn't already too.
My 2004 laptop had that issue a couple years back. I initially installed the ATI 
driver (which I still haven't quite managed to get rid of), and then they (ATI) 
dropped support for the R250 line-up.
I switched over to the open source radeon driver and all works just fine and dandy.
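
Roughly what the switch looked like on my Gentoo box, from memory (so treat the
package names as a sketch rather than a recipe):

# /etc/make.conf - build the open source driver instead of fglrx
VIDEO_CARDS="radeon"

# Rebuild the X driver stack and drop the binary blob
emerge --oneshot x11-base/xorg-drivers x11-drivers/xf86-video-ati
emerge --unmerge x11-drivers/ati-drivers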

Ben




Re: [gentoo-user] Re: How's the openrc update going for everyone?

2011-05-17 Thread BRM
- Original Message 

 From: Neil Bothwick n...@digimed.co.uk
  Okay - that's not entirely KDE's  problem; though it would have helped a
  long way with the KDE4 transition  if they kept a few people working on
  those issues.
 
 How would you  feel if you were a KDE dev told we're all going to play
 with the cool new  toys now, but we want you to stay here and look
  after the boring musty old stuff? It would be bad enough if you were
 being paid for it.

Many software developers are exactly in that position. So what? It's what you do 
when you want to maintain something.
That's also very much the case with numerous kernel developers - they work 
to keep older versions going as Linus and team move to the next version.
So yes, there are even volunteers that will do it.
 
   The big issue is that in moving to sole development of KDE4, distros
   started to drop KDE3 and replace it with KDE4. For example, Kubuntu
  8.04 LTS dropped KDE3 and used KDE4 long before KDE4 was really user
  worthy -  long before KDE was calling it user worthy.
 
 I think that says more about  Ubuntu than KDE, after all ,they'd done a
 similar thing with GNOME/Unity  now.

There were other distros too. Gentoo dropped KDE3 around 4.3.
 
  But KDEs actions
  of moving sole development to KDE4  prompted most distributions to do
  likewise.
 
 Many distros,  especially the enterprise focussed ones like SUSE, kept 3.5
 around for quite  a while.
 
  Had they kept a small team working on at least the build  issues until
  KDE4 reached 4.3 then the transition would have likely gone  a lot
  smoother.
 
 True, but no one expected it to take that long to  get ready, and
 diverting resources to look after 3.5 would have meant it  taking even
 longer.
 
   So install a distro that still supports  KDE3 if that's what you  want
   or need. KDE 3.5.10 is still  there, it hasn't been withdrawn from  the
   shelves. You're  hardly likely to use Gentoo for such users, so lack
   of core support  for 3.5 in Gentoo is not an issue either.
   
  
  While I  am not personally interested in it, please name one.
  
  Gentoo  doesn't support KDE3 any more. You have to go to Trinity to get
  the  newer, forked KDE3 series. Last I heard they were equivalent to a
  3.5.12  or so; but I haven't seen anything on the Desktop list for a
  while about  Trinity.
  
  Needless to say, you may be very hard pressed to find  a modern,
  up-to-date distribution that offers KDE3 support.
 
 If it  defaulted to KDE 3.5, it would be neither modern nor up to date.
 But at the  time of the transition, when KDE4 was still too flakey for
 many, there were  several - openSUSE for one.

There's a difference between modern, up-to-date, and functional versus modern, 
up-to-date, and bleeding-edge.
If you are aiming for bleeding-edge, then yes, moving to KDE4 at 4.0 would have 
been fine.
But most don't use or want to use bleeding edge - they want functional. In both 
cases they still want modern and up-to-date.

Ben




Re: [gentoo-user] Re: How's the openrc update going for everyone?

2011-05-13 Thread BRM
- Original Message 

 From: Neil Bothwick n...@digimed.co.uk
 On Thu, 12 May 2011 16:44:21 -0500, Dale wrote:
  Your questions don't  disprove what me and others have posted.  As I
  have said on the KDE  mailing list, KDE made a serious mistake dropping
  KDE3 before KDE4 was  ready.
 How exactly did they drop it? It's still available from
 ftp://ftp.kde.org/pub/kde/stable/3.5.10 even now and some  distros still
 have packages for it. It never went away, you can still use it  if you
 wish.
 

Ok, so personally I very much like KDE4 - been using it since 4.3 was 
stabilized 
on Gentoo and love it.
That said...

KDE did seem to drop the ball a bit with their management of the transition 
from 
KDE3 to KDE4.

To start with, look at the reason why Gentoo dropped KDE3 from Portage - KDE 
stopped maintaining it and the builds started breaking as underlying library 
dependencies changed.
So, sure you may be able to pull a binary build from KDE and use it; or (more 
likely) you'll spend hours and hours getting everything set up right - with all 
the correct versions of the dependencies, etc - to get it up and running.

In other words, when KDE decided to move on to KDE4 full time, they left the 
release as it was, and it has since gotten harder to use for those that want to 
use it.
Okay - that's not entirely KDE's problem; though it would have helped the KDE4 
transition a long way if they had kept a few people working on those issues.

The big issue is that in moving to sole development of KDE4, distros started to 
drop KDE3 and replace it with KDE4. For example, Kubuntu 8.04 LTS dropped KDE3 
and used KDE4 long before KDE4 was really user worthy - long before KDE was 
calling it user worthy. But KDEs actions of moving sole development to KDE4 
prompted most distributions to do likewise. As a result, KDE got a lot of flack 
for KDE4 not being ready for users b/c it wasn't - which KDE readily recognized 
and admitted.

Had they kept a small team working on at least the build issues until KDE4 
reached 4.3 then the transition would have likely gone a lot smoother. The 
userbase for KDE4 would have been smaller, so it may have taken a little longer 
to get some of the user feedback; but it would have greatly helped distributions 
and users make the transition instead of feeling like they were dumped from 
KDE 3.5.10 into KDE 4.0.1.

 
 So install a distro that still supports KDE3 if that's what you  want or
 need. KDE 3.5.10 is still there, it hasn't been withdrawn from  the
 shelves. You're hardly likely to use Gentoo for such users, so lack  of
 core support for 3.5 in Gentoo is not an issue either.
 

While I am not personally interested in it, please name one.

Gentoo doesn't support KDE3 any more. You have to go to Trinity to get the 
newer, forked KDE3 series. Last I heard they were equivalent to a 3.5.12 or so; 
but I haven't seen anything on the Desktop list for a while about Trinity.

Needless to say, you may be very hard pressed to find a modern, up-to-date 
distribution that offers KDE3 support.

Ben




Re: [gentoo-user] How's the openrc update going for everyone?

2011-05-13 Thread BRM

From: Daniel da Veiga danieldave...@gmail.com
On Tue, May 10, 2011 at 18:55, Dale rdalek1...@gmail.com wrote:
I was curious, what's the results of the openrc update for people that have 
done 

theirs?  Is it pretty simple and just works or are there issues?  I'm 
mostly 

interested in x86 and amd64 since that is what I have.  Just a simple works 
here 

and I'm X86 or amd64 would be nice.  List issues if you had any.
For me it was a breeze.
I have two machines running testing for some time and a server that was an 
year 

behind in updates. I decided to update it now. The easy part was the OpenRC 
migration. The hard was mysql (was still 4.1), php, apache (gave up and 
installed lighttpd instead) and (oh yeah) kernel.


I just finished updating my server. OpenRC updated without any problems.
My laptop had to have its compiler updated before I could do the sync and 
update - guess I didn't finish the previous update and KDE wanted GCC 4.4 
instead of 4.3.
I should be able to get it going tonight hopefully...

My desktop is a few months behind - still gotta get it fixed from a previous 
failed update.
But won't have the time for at least another month. I may end up just 
rebuilding 
it if the updates are too troublesome - it may prove faster.

Ben




Re: [gentoo-user] Re: How's the openrc update going for everyone?

2011-05-11 Thread BRM
- Original Message 

 From: Alan McKinnon alan.mckin...@gmail.com
  I still don't understand why the kde folks went from something  that
  worked extremely well to their current state. Baffling.
 
 KDE3  and KDE4 are not the same thing. 
 KDE4 is not the next version of  KDE3.
 
 You must consider KDE4 to be a completely new product, unrelated to  KDE3 in 
 any meaningful way except that many KDE4 devs used to work on a  different 
 project called KDE3.
 
 Like all software, KDE4 is not for  everyone - like you for example. But 
 there's nothing stopping you from  maintaining KDE3 yourself.
 
 Why did the devs switch? Market pressures  really. If you don't spot emerging 
 trends and follow them early, you run the  risk of becoming redundant very 
 quickly. Ask Microsoft, they know all about  this.
 
 They went from the undisputed behemoth market leader to staring down the very real
 threat of total obsolescence in three very short years.
 
 KDE  devs decided to take the risk and make the jump ahead of the curve.
 

Very much agreed. Ever wonder why what Apple and Microsoft are doing seems to 
simply be copying what KDE did with KDE4?
Yeah - KDE is at the forefront of the desktop right now, paving the path for how 
it's going to be used by essentially everyone as a result.

Ben




Re: [gentoo-user] [OT] bash script error

2011-05-09 Thread BRM
Well, I saw a lot of advice on this but no real solution - just some debugging 
help.

At least from my own experience with Bash Scripting, I find that you can never 
use enough braces when referencing variables.

So, the script should read:

url="http://mypage"
curl_opts="-x"
curl ${url} -d "mydata" ${curl_opts}


I would probably even go one further and use quotes in the final line as well, 
thus producing:

curl "${url}" -d "mydata" "${curl_opts}"

Please note that the single and double quotes have meanings as far as expansion.
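
If the real goal is to pass an option with an empty argument (like -x ''), the
cleanest way I know of is a bash array, which keeps each word intact through
expansion - just a sketch of the idea, not necessarily what the OP needs:

#!/bin/bash
url="http://mypage"

# Each array element stays exactly one word, even the empty proxy argument.
curl_opts=(-x '')

# "${curl_opts[@]}" expands to -x followed by an empty word, which is what
# curl expects; a plain string variable either drops the empty word or
# passes a literal '' instead.
curl "${url}" -d "mydata" "${curl_opts[@]}"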

Also, the guys on IRC (#bash) are quite helpful - a great resource in addition 
to 'man bash'.

$0.02

Ben


- Original Message 
 From: Xi Shen davidshe...@googlemail.com
 To: gentoo-user gentoo-user@lists.gentoo.org
 Sent: Mon, May 9, 2011 1:44:58 AM
 Subject: [gentoo-user] [OT] bash script error
 
 It is not specific to Gentoo. But do not know where to search or post it  :)
 
 My script looks like:
 
 url="http://mypage"
 curl_opts="-x ''"
 curl $url -d "mydata" $curl_opts
 
 If I execute it, I got an error  from curl, saying it cannot resolve
 the proxy ''.
 
 But If I modify the  script to:
 
 url="http://mypage"
 curl $url -d "mydata" -x ''
 
 It works  fine.
 
 I guess there's something wrong with the argument expansion. Just  do
 not know how to fix it. Please help.
 
 
 -- 
 Best Regards,
 Xi  Shen (David)
 
 http://twitter.com/davidshen84/
 
 



Re: [gentoo-user] Can a forced volume check be interrupted?

2011-04-12 Thread BRM
Probably, but why would you want to? It fixes any errors and makes the file 
system relatively clean again so that things function well - and things don't 
get lost.
If you skip it, you risk data corruption on disk.

If you know it's going to run, then you can do one of two things:
1) I believe there is an option to ignore it entirely
2) If you use Interactive mode then you can skip that step.

Both of those, however, require that you know (or assume) it's going to run fsck.
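
For the record, the "ignore it entirely" option I was thinking of lives on the
filesystem itself; tune2fs can show and change it (the device name is just an
example):

# Show the current mount-count and check-interval settings.
tune2fs -l /dev/sda3 | grep -iE 'mount count|check'

# Disable the count-based and time-based forced checks entirely -
# you then take on the job of running fsck yourself now and then.
tune2fs -c 0 -i 0 /dev/sda3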

Ben




- Original Message 
 From: Grant emailgr...@gmail.com
 To: Gentoo mailing list gentoo-user@lists.gentoo.org
 Sent: Tue, April 12, 2011 1:31:31 PM
 Subject: [gentoo-user] Can a forced volume check be interrupted?
 
 Sometimes the ext3 forced volume check at boot triggers at an
 inopportune  time.  Is there a way to skip it and let it run at the
 next  boot?
 
 - Grant
 
 



Re: [gentoo-user] Can a forced volume check be interrupted?

2011-04-12 Thread BRM
- Original Message 

 From: Grant emailgr...@gmail.com
  Probably, but why would you want to? it fixes any errors, and makes the  
file
  system relatively clean again so that things function well -  and things 
don't
  get lost.
  If you skip it, you risk data  corruption on disk.
 
  That misses the point.  I have rebooted  sometimes just for a quick
  change, possibly to try a different kernel,  and intending to reboot
  several times.  Then whoops! it starts a long  fsck scan, not to repair
  damage, but just because some counter went to  zero.  What a waste.
 
  It's like insisting on an oil change  exactly every 3000 miles.  No,
  sorry, I will wait until it is convenient  for *me*, not the odometer.
 
  So his question is, once the fsck  has started, can he ^C to bomb it
  off, or do anything else to skip what  has started?
 
 Exactly.  I couldn't get it to stop with ^C or i or  I.
 

No. You can't. Nor do you want to at that point.
Once it has started it really should run until completion; otherwise you really 
risk data corruption.
If you want to stop it, you have to prevent it from starting in the first place.

Ben




Re: [gentoo-user] Can a forced volume check be interrupted?

2011-04-12 Thread BRM
- Original Message 

 From: Grant emailgr...@gmail.com
 To: gentoo-user@lists.gentoo.org
 Sent: Tue, April 12, 2011 3:29:35 PM
 Subject: Re: [gentoo-user] Can a forced volume check be interrupted?
 
   Probably, but why would you want to? it fixes any errors, and  makes the
 file
   system relatively clean again so  that things function well -  and 
things
 don't
get lost.
   If you skip it, you risk data  corruption on  disk.
  
   That misses the point.  I have rebooted   sometimes just for a quick
   change, possibly to try a different  kernel,  and intending to reboot
   several times.  Then whoops!  it starts a long  fsck scan, not to repair
   damage, but just  because some counter went to  zero.  What a waste.
  
It's like insisting on an oil change  exactly every 3000 miles.   No,
   sorry, I will wait until it is convenient  for *me*, not  the odometer.
  
   So his question is, once the  fsck  has started, can he ^C to bomb it
   off, or do anything  else to skip what  has started?
 
  Exactly.  I couldn't get  it to stop with ^C or i or  I.
 
 
  No. You can't. Nor do  you want to at that point.
  Once it has started it really should run  until completion otherwise you 
really
  risk data corruption.
  If  you want to stop it, you have to prevent it from starting in the first  
place.
 
 Yeah, that can really be a drag.  Last night my Gentoo HTPC  checked
 the 2TB drive for 2 hours when I rebooted after a movie we  were
 watching froze.
 

As I said, if you are anticipating such a situation - or like the situation you 
are in - you can use the interactive boot or other methods to keep it from 
running to start with.
That is your best bet, and your safest.

Ben




Re: [gentoo-user] LVM for data drives but not the OS

2011-04-07 Thread BRM
- Original Message 

 From: Neil Bothwick n...@digimed.co.uk
 On Thu, 07 Apr 2011 05:22:41 -0500, Dale wrote:
  I want to do it this  way because I don't trust LVM enough to put my OS 
  on.  Just my  personal opinion on LVM.
 This doesn't make sense. Your OS can be  reinstalled in an hour or two,
 your photos etc. are  irreplaceable.
 

Makes perfect sense to me as well.

I installed LVM - and then removed it due to issues; namely, one of the hard 
drives died, taking out the whole LVM group, leaving the OS unbootable and not 
easily fixable. There was a thread on that (started by me) a while back (over a 
year ago).

So, perhaps if I had RAID underneath so I could mirror drives under LVM for 
recovery, I'd move to it again. But otherwise it is just a PITA waiting to 
happen.

Ben




Re: [gentoo-user] LVM for data drives but not the OS

2011-04-07 Thread BRM
- Original Message 

 From: Joost Roeleveld jo...@antarean.org
 On Thursday 07 April 2011 06:20:55 BRM wrote:
  - Original Message  
   From: Neil Bothwick n...@digimed.co.uk
   On Thu, 07 Apr 2011 05:22:41 -0500, Dale wrote:
 I want to do it this  way because I don't trust LVM enough to put  my
OS
on.  Just my  personal  opinion on LVM.
   
   This doesn't make sense. Your OS can  be  reinstalled in an hour or two,
   your photos etc. are   irreplaceable.
  
  Makes perfect sense to me as well.
  
  Having installed LVM - and then removed it due to issues; namely, the  fact
  that one of the hard drives died taking out the whole LVM group,  leaving
  the OS unbootable, and not easily fixable. There was a thread on  that
  (started by me) a while back (over a year).
  
  So,  perhaps if I had a RAID to underly so I could mirror drives under LVM
   for recovery I'd move to it again. But otherwise it is just a PITA  waiting
  to happen.
  
  Ben
 
 Unfortunately, any method  that spreads a filesystem over multiple disks can 
 be 

 affected if one of  those disks dies unless there is some mechanism in place 
 that can handle the  loss of a disk.
 For that, RAID (with the exception of striping, eg. RAID-0)  provides that.
 
 Just out of curiosity, as I never had the need to look into this, I think 
 that, in theory, it should be possible to recover data  from LVs that were 
 not 

 using the failed drive. Is this assumption correct or  wrong?
 

If you have the LV configuration information, then yes. Since I managed to find 
the configuration information, I was able to remove the affected PVs from the 
VG, and get it back up.
I might still have it running, but I'll back it out on the next rebuild - or if 
I have a drive large enough to do so with in the future. I was wanting to use 
LVM as a bit of a software RAID, but never quite got
that far in the configuration before it failed. It does do a good job at what 
it's designed for, but I would not trust the OS to it either since the LVM 
configuration is very important to keep around.

If you don't have it, then good luck, as far as I can tell.

Ben




Re: [gentoo-user] LVM for data drives but not the OS

2011-04-07 Thread BRM
- Original Message 

 From: Joost Roeleveld jo...@antarean.org
 On Thursday 07 April 2011 06:52:26 BRM wrote:
  - Original Message  
  
   From: Joost Roeleveld jo...@antarean.org
   
   On Thursday 07 April 2011 06:20:55 BRM wrote:
 - Original Message  

  From: Neil Bothwick n...@digimed.co.uk
  
 On Thu, 07 Apr 2011 05:22:41 -0500, Dale  wrote:
   I want to do it this  way because  I don't trust LVM enough
   to put   my
  
  OS
   on.  Just my  personal  opinion on LVM.
  
 This doesn't make sense. Your OS can   be  reinstalled in an hour
 or two, your photos etc.  are   irreplaceable.

Makes perfect  sense to me as well.

Having installed LVM -  and then removed it due to issues; namely,
the  fact that  one of the hard drives died taking out the whole LVM
 group,  leaving the OS unbootable, and not easily fixable. There
 was a thread on  that (started by me) a while back (over a  year).

So,  perhaps if I had a RAID to  underly so I could mirror drives
under LVM

 for recovery I'd move to it again. But otherwise it is  just a PITA
  waiting

 to happen.

Ben
   
Unfortunately, any method  that spreads a filesystem over multiple  disks
   can be
   
   affected if one of   those disks dies unless there is some mechanism in
   place that can  handle the  loss of a disk.
   For that, RAID (with the exception  of striping, eg. RAID-0)  provides
   that.
   
Just out of curiousity, as I never had the need to look  into this,  I
   think that, in theory, it should be possible to recover  data  from LVs
   that were not
   
   using  the failed drive. Is this assumption correct or  wrong?
  
  If  you have the LV configuration information, then yes. Since I managed to
   find the configuration information, I was able to remove the affected  PVs
  from the VG, and get it back up.
  I might still have it  running, but I'll back it out on the next rebuild - 
or
  if I have a drive  large enough to do so with in the future. I was wanting
  to use LVM as a  bit of a software RAID, but never quite got
  that far in the  configuration before it failed. It does do a good job at
  what it's  designed for, but I would not trust the OS to it either since the
  LVM  configuration is very important to keep around.
  
  If not, good  luck as far as I can tell.
  
  Ben
 
 LVM isn't actually RAID.  Not in the sense that one gets redundancy. If you 
 consider it to be a  flexible partitioning method, that can span multiple 
disks, 

 then yes.
 But  when spanning multiple disks, it will simply act like JBOD or RAID0. 
 Neither  protects someone from a single disk failure.
 
 On critical systems, I tend  to use:
 DISK - RAID - LVM - Filesystem
 
 The  disks are as reliable as Google says they are. They fail or they don't.
 RAID  protects against single disk-failure
 LVM makes the partitioning  flexible
 Filesystems are picked depending on what I use the partition  for
 

The attraction to LVM for me was that, from what I could tell, it supported and 
implemented a software RAID
so that I could protect against disk failure. I never got around to 
configuring that side of it, but that was my goal.
Or are you saying I was misunderstanding and LVM _does not_ contain 
software-RAID support?

Ben




Re: [gentoo-user] LVM for data drives but not the OS

2011-04-07 Thread BRM
- Original Message 

 From: J. Roeleveld jo...@antarean.org
 On Thu, April 7, 2011 7:31 pm, BRM wrote:
  The attraction to LVM  for me was that from what I could tell it supported
  and
   implemented a software-RAID
  so that I could help protect from  disk-failure. I never got around to
  configuring that side of it, but  that was my goal.
  Or are you saying I was misunderstanding and LVM _does  not_ contain
  software-RAID support?
 
 Unless I am mistaken, LVM  does not provide redundancy. It provides
 disk-spanning (JBOD) and basic  striping (RAID-0).
 
 For redundancy, I would use a proper RAID (either  hardware or software).
 On top of this, you can then decide to have a single  filesystem, LVM or
 even partition this.
 
 I think the confusion might  have come from the fact that both LVM and
 Linux Software Raid use the Device  Mapper interface in the kernel config
 and they are in the same  part.
 
 Also, part of the problem is that striping is also called RAID-0.  That, to
 people who don't fully understand it yet, makes it sound like it is  a
 RAID.
 It actually isn't as it doesn't provide any redundancy.

I think the issue comes from the fact that LVM2 supports Mirroring without an 
underlying RAID controller:

http://tinyurl.com/3woh2d7
http://en.wikipedia.org/wiki/Logical_Volume_Manager_%28Linux%29
http://www.gossamer-threads.com/lists/gentoo/performance/59776

Which would provide redundancy.
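
A rough sketch of what I had in mind (volume names are made up; this is LVM2's
built-in mirroring, not md RAID):

# Volume group spanning two physical disks.
pvcreate /dev/sdb1 /dev/sdc1
vgcreate vg_data /dev/sdb1 /dev/sdc1

# One logical volume with a single mirror copy, so each disk holds a full
# copy of the data; --mirrorlog core keeps the mirror log in memory so a
# third disk isn't needed just for the log.
lvcreate -L 100G -m 1 --mirrorlog core -n lv_mirrored vg_data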

 
 I  do hope you didn't loose too much important data when you had this  issue.
 

No, I didn't lose any important data (fortunately). If I had, I would have paid 
for the drive to be recovered; it was mostly portage, var/tmp, some extra 
sandbox stuff - that kind of thing.

Ben




Re: [gentoo-user] Any bought a laptop that Just Works?

2011-03-31 Thread BRM
At work, we've had a lot of success with Lenovos. My T61p (3 years old) is 
fully supported by Linux - wireless included - according to the documentation; I 
can't quite verify that, as I haven't been able to transition it (yet) to Linux.
Colleagues haven't had issues with another model, but I'm not sure which it is 
off-hand.

Ben



From: Kevin O'Gorman kogor...@gmail.com
To: gentoo-user@lists.gentoo.org
Sent: Thu, March 31, 2011 10:15:36 AM
Subject: Re: [gentoo-user] Any bought a laptop that Just Works?




On Wed, Mar 30, 2011 at 11:31 AM, Robin Atwood robin.atw...@attglobal.net 
wrote:

I am in the market for a new laptop and would be interested if anyone else on 
the list had recently bought a laptop in which all the hardware worked out of 
the box with Linux. I am most concerned about WiFi/audio/webcam, the finer 
points of hibernation are of lesser concern. Currently I have a Linux 
Certified machine but I want to avoid shipping costs to the UK.




TIA
-Robin


I bought a Gateway NV55C late last year, and Ubuntu went on without a hitch: 
sound, movies, webcam, wifi, ethernet, second monitor and all.

The only thing to dislike is that the machine does not have an indicator LED 
for 
caps lock -- on Win 7 it uses an on-screen icon each time the status changes. 


-- 
Kevin O'Gorman, PhD



Re: [gentoo-user] web redirection

2011-02-07 Thread BRM
Well, testing from work on a Windows system it seems to work ok.
Don't have a Linux system with a web browser readily accessible at the 
moment; so maybe it's a difference in platforms?

$0.02,

Ben



- Original Message 
 From: James wirel...@tampabay.rr.com
 To: gentoo-user@lists.gentoo.org
 Sent: Mon, February 7, 2011 10:53:32 AM
 Subject: [gentoo-user] web redirection
 
 Hello,
 
 
 I'm having trouble connecting to a url
 that previously  worked. It's a Microsoft
 based web server, over which I have no  control.
 
 confirmation of the aberrant behavior, as I
 have tried  seamonkey, konqueror and Firefox,
 would be appreciated
 
 
 www.flvs.net
 
 (just click  the login button)
 
 Firefox has detected that the server is redirecting 
 the request for this address in a way that will never  complete.
 
 
 Is this some new MS scourge?
 
 
 any work around  is appreciated.
 
 
 James
 
 
 
 
 



Re: [gentoo-user] Re: Emerge Problems...

2011-02-02 Thread BRM
From: Peter Humphrey pe...@humphrey.ukfsn.org
On Tuesday 01 February 2011 20:43:43 BRM wrote:
 And you're doing a typically manual process for updating all the
 systems - update your server first, then any rsync clients. Fine and
 dandy if that is your process - but it's not mine. I may update my
 laptop twice as often as the other two, especially if I want to play
 with some software or try something out, or fix a bug, or get a
 later version of KDE. The server gets updated may be once a month,
 while the laptop is either once a month or at whim when I want
 something that just came out.
 
 It's not harder to do it this way, just a different method. The
 original rsync script worked perfectly fine; the broken update I did
 when I lost it is what started this whole thread.
What's wrong with keeping your server's portage cache up to date? You don't 
have 

to update the server from it if you don't want to, but if the cache is out of 
date it isn't being much of a server.
I recommend Occam's Razor.
-- 

Here's the problem with the Server's /usr/portage being hosted by rsync:

- Server sync's its portage against gentoo mirrors (emerge --sync)
- Update Server (emerge world -vuDN)
- Client sync's its portage against server portage mirror (emerge --sync)
- Update Client (emerge world -vudN)

So if you are manually updating the server, then no problem - you control the 
timing.

Now all that seems to work fine until you introduce the automatic updates of 
the 
server's portage, e.g. via cron.
Suppose the Server Update doesn't complete due to a build error. If the server 
automatically updated its portage during the build, then when you go to redo 
the build you may end up with another set of updates to push in while you haven't 
yet finished the last round. Sure, the clients will still update just fine - 
it's not a problem for _them_, it's a problem for the server.
- 
it's not a problem for _them_, it's a problem for the server.

So, Occam's Razor - store the rsync hosted portage mirror separately from the 
server's /usr/portage copy, and sync the server against the local rsync just 
like all the clients.
The rsync hosted mirror can now be updated at will without any repercussions to 
any install, and the server works just like any of the clients; so now you end 
up with:

- sync server portage mirror against gentoo mirrors at scheduled intervals, 
e.g. 
every day at midnight
- Server sync's its portage against server portage mirror (emerge --sync)
- Update Server (emerge world -vuDN)
- Client sync's its portage against server portage mirror (emerge --sync)
- Update Client (emerge world -vudN)

The server is now completely 100% independent of the portage it is hosting for 
everyone else on the internal network, and you can get through a full update - 
resolving all issues, etc. - before any re-syncing.
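
For anyone curious, the nightly job boils down to something like this (the mirror
host and paths are examples, not exactly what I run, since I'm still recreating
my script):

#!/bin/sh
# Pull the official tree into the directory rsyncd exports -
# NOT into the server's own /usr/portage.
MIRROR=rsync://rsync.gentoo.org/gentoo-portage/
DEST=/srv/gentoo-rsync/portage/

rsync --recursive --links --times --delete --timeout=300 "$MIRROR" "$DEST"

The rsyncd.conf module path then points at that directory, and every box - the
server included - just sets SYNC in make.conf to the local mirror.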

So then the question becomes: why run the nightly cron job to update the server's 
portage mirror?
B/c I am not updating or installing software on my server as frequently as my 
other systems; so it doesn't need to be in sync itself as frequently.

Ben




Re: [gentoo-user] Re: Emerge Problems...

2011-02-01 Thread BRM
- Original Message 

 From: Dale rdalek1...@gmail.com
 Nils Holland wrote:
  On 21:35 Mon 31 Jan , Francesco  Talamona wrote:
  On Monday 31 January 2011, BRM  wrote:
  I just wrote a new  script last night, but I'm still not sure that all
  of the   parameters are correct
 
   Why not something proven and reliable like emerge --sync?

  In fact, what I always do is sync one of my machines with  an official
  Gentoo mirror via emerge --sync, and then I just use rsync  to
  distribute the updated tree to all my other local machines as  in:
  
  rsync --delete -trmv /usr/portage/ user@dest_host:/usr/portage
  
  One  might want to ask rsync to exclude the distfiles directory,
  but I always  include it as it oftentimes saves me the download of a
  file I've already  downloaded during an emerge on another machine.
  
  In any case,  locally updating my tree via rsync has always worked fine
  for me.  Leaving the --delete option to rsync out, however,
  immediately leads  to problems, with various ebuild-related error
  messages on subsequent  emerges. I can imagine that the OP did, in
  fact, update his tree in  such an inconsistent manner, but that can
  certainly be fixed, with the  surest way being a emerge --sync using
  an official mirror.
  

Definitely missed the delete option on the new script.

 Maybe I am missing  something but I have two machines here.  I sync to the 
Gentoo servers with  the main rig and then sync the second rig from the main 
rig.  All you have  to do is start the rsync service and set the IP address in 
the SYNC line in  make.conf on the second rig.  This is my rsyncd.conf on the 
main  rig:
 
 # Simple example for enabling your own local rsync  server
 [gentoo-portage]
 path = /usr/portage
 comment = Gentoo Portage  tree
 exclude = /distfiles /packages
 
 If you want to include distfiles,  just remove it from the exclude line.  For 
my distfiles, I run  http-replicator to fetch those.  It works pretty  well.
 

If the machine you are hosting portage on (via rsync) is fast enough to 
complete 
all updates within the update cycle (e.g. sync'ing 1 time a day, so it has 
23:59:59 to complete all builds) then it is likely not a problem to do it that way.

If the machine is not fast enough - mine is a PII 233 w/160 MB RAM, takes a 
while to do updates - then you really have to separate out what you are hosting 
from what you are using. Otherwise you end up in the situation that you have 
started one system update (or software install), have a build failure for 
whatever reason, and then can't complete the same one due to changes in the 
local copy of portage.

So, even if your system fell into the first situation - where it is fast enough 
- then I would still recommend doing the little extra to run it as in the second 
situation. It's just far easier to maintain. I'm actually surprised the Gentoo 
Mirror documentation doesn't recommend doing this to start with, but then again 
- the machines they recommend are magnitudes faster than what I'm running so it's 
not likely an issue. (Either that or everyone figures it out on their own and 
then just doesn't say anything.)

Why?

The local portage copy is always up-to-date, or reasonably so. No - I don't 
sync 
every 1/2 hour (like the official mirrors do), but I could force it to sync 
when 
I need to if that was an issue; typically once a day is sufficient and that's 
run by a cron job. But I also keep my server system relatively stable - I don't 
install a lot of software on it, and I don't necessarily update it frequently. 
So now I can update my laptop and desktop as well without having to first 
update 
the server itself since the rsync hosted portage is independent of the server.

Ben



Re: [gentoo-user] Re: Emerge Problems...

2011-02-01 Thread BRM




- Original Message 
 From: Dale rdalek1...@gmail.com
 To: gentoo-user@lists.gentoo.org
 Sent: Tue, February 1, 2011 12:20:56 PM
 Subject: Re: [gentoo-user] Re: Emerge Problems...
 
 Neil Bothwick wrote:
  On Tue, 1 Feb 2011 05:48:32 -0800 (PST), BRM  wrote:
  
 
  If the machine is not fast  enough - mine is a PII 233 w/160 MB RAM,
  takes a while do to updates  - then you really have to separate out what
  you are hosting from  what you are using. Otherwise you end up in the
  situation that you  have started one system update (or software
  install), have a build  failure for whatever reason, and then can't
  complete the same one  due to changes in the local copy of portage.
   
  You can still use emerge -sync instead of a home brewed script. In  make
  conf, set SYNC to localhost, then in your cron job, do
  
  SYNC=some gentoo rsync mirror emerge --sync
  
  
  So, even if your system fell into the first situation -  where it is
  fast enough
  - then I would still recommend  doing the little extra to run as the
  second situation. It's just far  easier to maintain.
   
  I've been using a  single portage tree to serve a LAN and for use by the
  host for years  with no hint of any of the problems you suggest. I just
  make sure the  cron job on the server syncs earlier than the rest of the
  LAN and  everything is up to date.
  
 
 
 I used to have  four computers a good while back.  Back then, I synced my 
 main 
rig then  synced the others off it.  This was several years ago.  I don't use 
a  
cron job or anything to do this, just some old fashioned typing.  I don't  
recall ever having trouble with it syncing to my main rig.  Did I mention  it 
was a very old Compaq 200MHz CPU machine with a whopping 128MBs of ram?   
Thing 
looks like a filing cabinet.
 
 To me, it seems the OP is making  something complicated when it is just not 
needed.  If you want to use cron  jobs, set the main rig to sync a hour before 
the others would be set to sync  against it.  If the rig that syncs to Gentoo 
servers is to slow, set them  two hours apart.  From my understanding, you get 
the same tree all the way  around.
 
 Giving some more thought, I once put /usr/portage on nfs.  I  sync once and 
 all 
the systems used the same copy of the tree.  The other  way worked out to be 
easier tho.  I seem to recall the need for running  emerge --metadata too.  
That 
took a while on the old Compaq.   lol
 

And you're doing a typically manual process for updating all the systems - 
update your server first, then any rsync clients. Fine and dandy if that is your 
process - but it's not mine. I may update my laptop twice as often as the other 
two, especially if I want to play with some software or try something out, or 
fix a bug, or get a later version of KDE. The server gets updated maybe once a 
month, while the laptop is either once a month or at whim when I want something 
that just came out.

It's not harder to do it this way, just a different method. The original rsync 
script worked perfectly fine; the broken update I did when I lost it is what 
started this whole thread.

As the old saying goes - Different Strokes for Different Folks.

Ben




Re: [gentoo-user] Emerge Problems...

2011-01-31 Thread BRM
- Original Message 

 From: Nils Holland n...@tisys.org
 On 20:12 Sat 29 Jan , BRM wrote:
  A little while back my  server ran out of hard disk space (due to a failed 
hard 

  drive) and as a  result my local portage mirror got destroyed.
 Well, I fixed the server - initially by just grabbing a new copy of 
portage 

  like a new install  since it was just completely hosed, and the server is 
back up 

  and  working. However, now my desktop and laptop are both having problems. 
They 

  sync just fine against the server, but I get a series of errors about  not 
having 

  various ebuilds in the manifest files - so many that I can't  emerge 
  anything 

  (even portage).
 
 I believe you will already have  checked this, but anyways:
 
 I once upon a time experienced a similar  issue, which was caused by the fact 
that for some reason, I was only syncing new / modified files from the source 
to 
my local portage tree, and not deleting no  longer existent (on the source) 
files from the local tree. This resulted in  emerge complaining about various 
ebuilds not being found.
 
 I was kind of  shocked at first, then found my error, and on properly 
(including deletes)  syncing with my portage source everything immediately 
started working fine again  on the local (destination) machine.
 
 But again, I believe it's highly  unprobable that this is your problem, 
 because 
if you synced correctly before  your server had to be re-setup, I would 
believe that you're doing it correctly  now as well, at least I can't see what 
should have changed concerning the sync due to the act of replacing the 
server...
 

Maybe I didn't get the server back up right? Not sure.
Anyhow... the primary issue was resolved once I deleted the server's portage 
mirror and then ran rsync again to grab a fresh copy.
I'm pretty sure it would have to be how I rsync'd the mirror, since I lost my 
mirroring script when the old hard drive died.
I just wrote a new script last night, but I'm still not sure that all of the 
parameters are correct - I'll check into that more this evening.
Once I get it right, I'll restore it to doing the daily mirror syncs again.

Now I just have to get past all the issues coming up in the updates and 
rebuilds, but that was to be expected.

Thanks!

Ben




Re: [gentoo-user] Re: Emerge Problems...

2011-01-31 Thread BRM
- Original Message 

 From: Francesco Talamona francesco.talam...@know.eu
 On Monday 31 January 2011, BRM wrote:
  I just wrote a new script last  night, but I'm still not sure that all
  of the  parameters are  correct
 
 Why not something proven and reliable like emerge  --sync?
 

emerge --sync works fine for your _normal_ portage tree.
But if you are running a mirror on a gentoo system that also needs its own copy 
of portage, then you really need to have two portage trees on the system.
One portage tree is hosted by rsync for all - it can be synch'd at will with 
the 
official portage trees.
The second portage tree is the system's portage tree, and is only sync'd when 
you update it - just like any other gentoo system.

Why?

I originally ran the server with rsync hosting its portage tree, with daily 
synchronizations. However, when I forgot and let the server fall behind a little 
in updates, it quickly became clear that it needed its own separate copy of 
portage so I could install software without synchronizing portage - or rather, 
install software without having to update the whole system, etc.

Now, maybe there are options for emerge --sync that I'm not aware of to 
handle just this case - but this setup works very well, and I ran it for quite a while. 
Sadly, I did not have that script backed up or anything; so I will have to 
recreate it.

Ben




Re: [gentoo-user] Emerge Problems...

2011-01-30 Thread BRM
- Original Message 

 From: Neil Bothwick n...@digimed.co.uk
 To: gentoo-user@lists.gentoo.org
 Sent: Sun, January 30, 2011 7:03:27 AM
 Subject: Re: [gentoo-user] Emerge Problems...
 
 On Sat, 29 Jan 2011 20:12:26 -0800 (PST), BRM wrote:
 
   Well, I fixed the server - initially by just grabbing a new copy of
  portage like a  new install since it was just completely hosed, and the
  server is back  up and working. However, now my desktop and laptop are
  both having  problems. They sync just fine against the server, but I get
  a series of  errors about not having various ebuilds in the manifest
  files - so many  that I can't emerge anything (even portage).
 
 Completely remove the  portage tree, fsck the filesystem and then resync.

Well, I certainly have to try that out.

 Of course, you can get  your other systems working by commenting out any
 SYNC entries in make.conf  and letting them sync directly with the Gentoo
 servers.

Can't edit the files on the laptop, possible on the desktop though.

 From: Francesco Talamona francesco.talam...@know.eu
 It seems your three systems  share a broken portage tree, try with the 
 latest portage snapshot, for  example from 
 http://distro.ibiblio.org/pub/linux/distributions/gentoo/snapshots/
 
 You  can also skip the sync and put it directly on the clients to see if 
 the  rsync service on server is broken... 
 
 Once you stabilize the root cause,  it's time to focus on the other 
 issues (for example run a non-X runlevel on  the laptop to fix the login 
 issue, use nano until vim is ok, and so  on).

I'm not a fan of nano, so I uninstalled it a long time ago. I usually use vim; 
not sure why vim is referencing perl libraries, but oh well.

And yes - fixing the portage issue is the first step. After that everything 
else 
will just fall out - since I can just run the various emerges and perl-cleaner.

Ben




[gentoo-user] Emerge Problems...

2011-01-29 Thread BRM
A little while back my server ran out of hard disk space (due to a failed hard 
drive) and as a result my local portage mirror got destroyed.
Well, I fixed the server - initially by just grabbing a new copy of portage 
like a new install since it was just completely hosed, and the server is back 
up 
and working. However, now my desktop and laptop are both having problems. They 
sync just fine against the server, but I get a series of errors about not 
having 
various ebuilds in the manifest files - so many that I can't emerge anything 
(even portage).

Right now, my laptop is basically hosed - KDE/X won't work on login due to some 
errors. My desktop at least logs in to KDE/X. However, on both systems I am 
having the manifest problem, and I can't edit files either since vim is screwed 
up due to a change in perl - and I can't run perl-cleaner due to the emerge 
problem.

I know both systems can be restored to being fully functional and up-to-date. 
The question is - how do I get there?

I ran across some emails in the list archive on a similar issue - though that 
was only for 1 ebuild - and it was straightforward enough to fix by just 
rebuilding the manifest through 'ebuild' or something. I ran across another 
e-mail suggesting to just resync, and well - I tried that but it didn't work.
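
The per-ebuild fix I ran across looks roughly like this (the package and version
are just examples) - which is exactly what I'd rather not do for hundreds of
packages by hand:

# Regenerate the Manifest for a single package (may need to fetch its
# distfiles to recompute digests)...
ebuild /usr/portage/app-editors/vim/vim-7.3.ebuild manifest

# ...or brute-force the whole tree, which would be painfully slow here.
find /usr/portage -name '*.ebuild' | while read -r eb; do
    ebuild "$eb" manifest
done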

So my question is - is there a way to automatically fix all these manifest 
things without having to track down each one by hand and run the 'ebuild' thing 
on each one individually? I'm completely out of ideas, and I'd really like to 
get these systems back to full functionality.

TIA,

Ben




Re: [gentoo-user] Microcode update AMD

2011-01-17 Thread BRM
- Original Message 

 I have two questions:
 
  1) Do I have to enable microcode  updates in the BIOS of my Crosshair
 IV Formula to activate  microcodes push in the CPU by the module
 microcode ? (AMD  Phenom X6 1090T)

Not sure about BIOS, but the Linux Kernel you are running will certainly need 
support enabled too.
 
  2) Does anyone know, what these microcodes do? They are  fixes for...
 ...what?

The Intel and AMD processors are more abstract than physical now. With i486 and 
earlier the processors were typically hard-wired; hardware bug fixes could not 
be pushed out.
Intel's Pentium (and I don't know which AMD generation) started using micro-code to 
program 
the processor. This enabled them to push out hardware bug fixes for the 
processors.

So what happens is the x86 instruction (e.g. mov ax, bx) gets translated to 
micro-code first, then it gets processed, and the result translated back to the 
expected instruction result - essentially, emulating the x86 instruction set in 
the processor. That's the simple version.

So now when they discover a bug in the hardware they can push out a micro-code 
update to either fix a microcode bug or work around a physical hardware bug.
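
If you just want to see whether a microcode update actually took, something like
this should show it (assuming your kernel exposes the field; exact messages vary):

# Microcode revision the CPU is currently running.
grep -m1 microcode /proc/cpuinfo

# What the kernel's microcode driver reported at load/update time.
dmesg | grep -i microcode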

Ben




[gentoo-user] Zenoss on Gentoo?

2011-01-05 Thread BRM
Is there a Gentoo Package for Zenoss Community edition 
(http://community.zenoss.org/community/download)?
It's available via sf.net (http://sourceforge.net/projects/zenoss/). If not, oh 
well.
I saw it recommended on another list as something I thought I may be interested in 
trying out at home.

TIA,

Ben



Re: [gentoo-user] [OT] Windows 'Remote Assistance'

2010-11-22 Thread BRM
- Original Message 

 From: Mark Knecht markkne...@gmail.com
 To: gentoo-user@lists.gentoo.org
 Sent: Sun, November 21, 2010 1:26:30 PM
 Subject: Re: [gentoo-user] [OT] Windows 'Remote Assistance'
 
 On Sun, Nov 21, 2010 at 9:29 AM, Mick michaelkintz...@gmail.com  wrote:
  I know this is somewhat off-topic but I am losing friends here by  telling 
them
  all the time that they should install Linux or at least  install and 
configure
  VNC if they need my help ...  plus some of them I  don't mind helping 
anyway.
 
  Lesser MSWindows versions do not seem  to allow connections via RDP and
  therefore rdesktop and krdc will not  connect to them without some VNC 
server
  running on the Windows  machine.
 
  Meanwhile Windows has this 'Remote Assistance'  function, which allows what 
it
   says and uses PNRP (Peer Name Resolution Protocol).

AFAIK, PNRP is just a way to make the RDP sessions hook up - it's usually 
integrated with the Windows Messenger IM client. If they have Remote Assistance 
available, then they also have the ability to enable RDP - it's in the same place 
to enable, though I think only Remote Assistance is enabled by default.

Just have them go (via Classic Control Panel):

Start-Settings-Control Panel-System.

Then click on the Remote tab, which presents options for Remote Assistance 
and Remote Desktop.

I don't think RDP is available on Win2k without licenses - it's just Terminal 
Services there. It is available
on WinXP and later; however, without Terminal Services licenses two users cannot 
be logged in at the same time.
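
Once Remote Desktop is ticked on their side, connecting from the Linux end is
just something like (host name and geometry are examples):

# Basic RDP session to the remote Windows box; krdc works too, as noted above.
rdesktop -u theiruser -g 1280x800 windows-host.example.com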


$0.02

Ben




Re: [gentoo-user] Nepomuk indexing, what triggers it?

2010-11-19 Thread BRM
- Original Message 

 From: Paul Hartman paul.hartman+gen...@gmail.com
 To: gentoo-user@lists.gentoo.org
 Sent: Fri, November 19, 2010 11:31:39 AM
 Subject: Re: [gentoo-user] Nepomuk indexing, what triggers it?
 
 On Fri, Nov 19, 2010 at 9:17 AM, Alan McKinnon alan.mckin...@gmail.com  
wrote:
  Hi all,
 
  Haven't had much luck finding this  info:
 
  If I reboot this machine and start KDE, Nepomuk starts a  rather long-lived
  index of my home directory. It takes up about 30-40%  cpu and lasts as much 
as
  15 minutes sometimes. This is annoying because  after a reboot I usually 
want
  to catch up on mail, rss feeds and fire up  VirtualBox. So nepomuk is just
  wasting my time at this point.
 
 My  /guess/ is that it scans every time you restart to be sure nothing
 changed  while it was shutdown. It doesn't know if you've dual-booted,
 logged into  xfce, mounted the disk in another machine, had fsck remove
 files,  etc.
 
 I think Tracker behaves the same way in gnome-land.

To add to it - Nepomuk has two parts (according to 
http://nepomuk.kde.org/node/2) that seem to be active in here:
1. Strigi - 
http://techbase.kde.org/Development/Tutorials/Metadata/Nepomuk/StrigiService
2. FileWatchService - 
http://techbase.kde.org/Development/Tutorials/Metadata/Nepomuk/FileWatchService

From the FileWatchService info:

However: due to the restrictions of all file watching systems  available 
(systems such as inotify are restricted to 8000 something  watches, fam does 
not 
support file moving monitoring, etc.) the service  mostly relies on KDirNotify. 
Thus, all operations performed by KDE  applications through KIO are monitored 
while all other operations (such  as console commands) are missed.

So it really does need to check up on things during restart, both to get back in 
sync and to catch changes it didn't know about because they went through something 
other than an interface it is aware of.

Ben




Re: [gentoo-user] KDE-4 multi-monitor + fullscreen applications

2010-11-18 Thread BRM
- Original Message 

 From: YoYo Siska y...@gl.ksp.sk
 To: gentoo-user@lists.gentoo.org
 Sent: Thu, November 18, 2010 12:41:03 PM
 Subject: Re: [gentoo-user] KDE-4 multi-monitor + fullscreen applications
 
 On Wed, Nov 17, 2010 at 11:41:25PM +0100, Florian Philipp wrote:
  Am  17.11.2010 23:26, schrieb Alan McKinnon:
   Apparently, though  unproven, at 00:08 on Thursday 18 November 2010, 
Florian 

   Philipp  did opine thusly:
   
   Hi list!
   
   Today, KDE nearly killed a presentation I held and now  I want to
   understand what's going on:
  
Following setup: One laptop, two outputs (internal display +  
projector).
  
   Now I configure KDE to expand the  desktop on both (instead of simple
   cloning). So far, so  good.
   
   For anyone to help at all, we'll need to know  your hardware and video 
drivers, 

   plus versions in use of X.org and  it's drivers, plus relevant config 
stuff.
   
   Everything  else is highly configurable and subject to the whim of driver 
writers and the user. And there's always nVidia's stance to be taken 
   into 

   account as well
   
  
  Ah, right, forgot  about that. Intel GMA HD graphics (i915 driver),
   x11-base/xorg-server-1.8.2 (USE=udev -hal) and  x11-base/xorg-drivers-1.8
  
  No xorg.conf. Tried it with composite  effects off and on.
  
  KDE is on version 4.4.5 and some packages  4.4.7 (current stable).
  
  
   First  question: How does KDE choose on which output the standard
desktop ends up and which gets the second set of desktop background +
plasma widgets? It seems like the one with the higher resolution  is
   standard and on a draw, it is the right-most. Is that  correct? Can it be
   configured?
  
Now that I have both desktops, I open Acroread or Okular and start  the
   fullscreen/presentation mode. What happens is that the  presentation is
   deterministically opened on one of the  displays. What I don't understand
   is how it chooses which one  it uses?
 xrandr 1.3 has a new option to say which output should be  'primary'
 
 you can try something like
 
 xrandr --output LVDS1 --mode 1024x768 --pos 0x0 --primary --output VGA1
 --mode 1024x768 --right-of LVDS1
 
 However IIRC kde used to ignore which display was primary (reported  as
 xinerama screen 0) and somehow decided on its own order...
 
 Here okular works correctly (well, at least Current screen and Screen
 XX used to work, don't remember for Default screen and can't test
 right now...), but right now I'm using fluxbox as window manager and not
 kwin, though it would be weird if that actually made things break.. ;)
 


snip
You may be interested in this post by Aaron Seigo:

http://aseigo.blogspot.com/2010/11/multihead-plasma-desktop-needs-you.html

Also, check the KDE System Settings as I mentioned in my first post on this thread.
I don't have an Xorg config file, and yet I can do multiple displays through KDE;
I also don't configure XRandR by hand. This is all independent of the video card,
though, as A. Seigo points out, there are some issues still being fixed.

I'm not sure if you're trying to run multi-head or simply multi-screen - I suspect
multi-screen, since you don't have an xorg.conf file either. Please read Aaron's
post.
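If it helps to confirm which of the two you are running, a rough check (assuming
xrandr is installed):

xrandr --query | grep -w connected

A single X screen listing two connected outputs is the RandR/multi-screen case;
true multi-head would instead show up as separate X screens (:0.0, :0.1)
configured in xorg.conf.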

Ben



Re: [gentoo-user] KDE-4 multi-monitor + fullscreen applications

2010-11-17 Thread BRM
- Original Message 

 From: Florian Philipp li...@f_philipp.fastmail.net
 To: Gentoo User List gentoo-user@lists.gentoo.org
 Sent: Wed, November 17, 2010 5:08:33 PM
 Subject: [gentoo-user] KDE-4 multi-monitor + fullscreen applications
 
 Hi list!
 
 Today, KDE nearly killed a presentation I held and now I want  to
 understand what's going on:
 
 Following setup: One laptop, two  outputs (internal display + projector).
 
 Now I configure KDE to expand the  desktop on both (instead of simple
 cloning). So far, so good.
 
 First  question: How does KDE choose on which output the standard
 desktop ends up  and which gets the second set of desktop background +
 plasma widgets? It  seems like the one with the higher resolution is
 standard and on a draw, it  is the right-most. Is that correct? Can it be
 configured?

I haven't played with the KDE4 multi-monitor mode enough yet, but I would think
it would be in the Display section of the System Settings for KDE4.
 
Reading over:
http://forum.kde.org/viewtopic.php?f=66t=82510
http://forum.kde.org/viewtopic.php?f=66t=25765

It seems Kephal is the culprit. Quite a bit was fixed for 4.2, and even more 
for 
4.3, 4.4, and 4.5.
So you may want to see if it's a bug related to something pre-4.5.
Looks like 4.5 is in testing:

http://gentoo-portage.com/kde-base/kde-meta
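To check locally what portage sees (a quick sketch; eix assumes app-portage/eix is
installed):

emerge --search kde-meta
eix kde-base/kde-meta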

Just a thought; wish I could be more helpful.

Ben




Re: [gentoo-user] Re: [Waaay OT] Defrag tool for windoze

2010-11-16 Thread BRM
- Original Message 

 From: Alan McKinnon alan.mckin...@gmail.com
 To: gentoo-user@lists.gentoo.org
 Cc: Grant Edwards grant.b.edwa...@gmail.com
 Apparently, though unproven, at 17:34 on Tuesday 16 November 2010, Grant 
 Edwards did opine thusly:
  On 2010-11-16, J. Roeleveld jo...@antarean.org wrote:
spinrite claims to make the head do other things than what the  drive
   firmware makes it do.
  
  I'm afraid I'll  have to call bullshit on that.  I don't see how some
  bit of PC  software can make a drive head move.  The firmware on the
  drive  controller board is the only thing that can make the head move.
  Does  spinrite claim they _replace_ the drive firmware with their own
  custom  version?
 
 Firmware is nothing more than high-level software that wraps low-level
 commands on the drive. High and low are to be taken here within the context of
 a drive and its controls, so don't be thinking it's on the same level as
 fopen()

 SOMETHING makes the head move. That something is the servos, and they are
 under software control (how could it be otherwise?) If the registers and
 commands that control that can be exposed, fine control is possible. The
 firmware does not itself define the only things the head can do, in the same
 way that a file system does not define the only things that can be written to
 a disk

While I am no hard drive expert, I would suppose that only the firmware has access
to the registers and commands that actually control the internals of the drive.
It may be possible to use some less-published functionality in the firmware, but
I find it hard to believe the manufacturers would allow the internals of the drive
to be controlled by anything other than their own software (i.e. the firmware).

The primary responsibility of the firmware is to act as the control software and
present the software interfaces that are expected - e.g. support the commands
received via the hardware bus interface (PATA, SATA, etc.).

There are probably some extra functions in there for diagnostic purposes, but they
are likely known only to the manufacturer - things you could only expect software
from the manufacturer to support or even be aware of. In that case you wouldn't be
bypassing the firmware, just using it in a slightly different, unpublished,
manufacturer-only mode (user beware), much like a firmware update does.

Thus I'd have to agree with the BS-call.
 
Again, I am no hard drive expert.

$0.02

Ben




Re: [gentoo-user] [Waaay OT] Defrag tool for windoze

2010-11-15 Thread BRM
- Original Message 

 From: Dale rdalek1...@gmail.com
 I have a niece that brought me her puter.  It's a HP with  windoze XP on it.  
 I 
want to defrag the hard drive but the one that comes  with windoze won't work. 
 
Is there a free defrag tool that is safe on  windoze?  I ask because I don't 
want to install something and not know what  I am installing.  You know, some 
program with a nasty virus attached or  something.
 
 I did Google and found a lot of tools but I'm not sure which  one to trust.  
 If 
someone here has used one before and trusts the one they  used, I would be 
happy 
to hear about  it.

Well, the built-in version of the Defrag program is really a shill for the full
product. But that's mostly b/c Microsoft licenses a stripped-down version of a
really good piece of software called Diskeeper (http://www.diskeeper.com/). While
it's not free, open source, etc., the price is worth it to keep a Windows system
in check and running smoothly.

Ben




Re: [gentoo-user] [Waaay OT] Defrag tool for windoze

2010-11-15 Thread BRM
- Original Message 

 From: Mike Edenfield kut...@kutulu.org
 On 11/15/2010 11:05 AM, Florian Philipp wrote:
  It's LGPL licensed.  The GUI is a bit ugly but it has a lot of
  functionality and can handle  cases in which the Windows defragger
  doesn't work. That mostly happens  when the disk is nearly full.
 
 Since we're *way* off topic as it  is:
 
 mydefrag isn't LGPL, just freeware, but I did notice this on  the
 jkdefrag site:
 
 The executables are released under the GNU General  Public License, and
 the sources are released under the GNU Lesser General  Public License.
 
 Is that even possible?


I doubt it, but you'd have to ask FSF to know for sure.

(IANAL)

Ben




Re: [gentoo-user] bash scripting tip

2010-11-12 Thread BRM
- Original Message 

 From: Hilco Wijbenga hilco.wijbe...@gmail.com
 On 12 November 2010 10:36, Hilco Wijbenga hilco.wijbe...@gmail.com  wrote:
  On 12 November 2010 09:57, Philip Webb purs...@ca.inter.net  wrote:
  It needs to be a Bash function, so in  ~/.bashrc
   I tried 'function cd2() { cd .. ; cd $1 ; }',
 
   Doesn't
 
  function cd2() { cd ../$1 }
 
  work? (I  haven't tried it.)
 
 So yes, this:
 
 function cd2() { cd ../$1;  }
 
 works.
 
Something I have found useful is the pushd/popd builtins in Bash.
Of course, to use them the way you want to, you'd have to use a two-step procedure:

1. Init to the directory you want:

function cdInit()
{
    pushd "${1}" > /dev/null
    pushd "${2}" > /dev/null
}

2. cd away:

function cd2()
{
    popd > /dev/null
    pushd "${1}" > /dev/null
}

3. Close out when you're done:

function cdFini()
{
    popd
}

You could probably modify the above to pull out the initial directory from a
single string - e.g. turn /my/path/parent/child into /my/path/parent - as well.
You could also process the DIRSTACK variable (or use the 'dirs' command) to see
if the parent directory is already on the stack.

Note: I have the redirects to /dev/null in there because pushd/popd by default
print the DIRSTACK as their output.
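For example, a session using the above might look like this (a sketch, assuming
the functions are sourced from ~/.bashrc):

cdInit /my/path/parent /my/path/parent/child1   # cwd is now child1, parent is on the stack
cd2 child2                                      # pop child1, cwd becomes /my/path/parent/child2
cd2 child3                                      # same again for child3
cdFini                                          # drop the child entry; you end up in /my/path/parent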

$0.02

Ben




Re: [gentoo-user] Converting RCS/CVS to git

2010-11-02 Thread BRM
The cvs2svn project also has a cvs2git tool.

http://cvs2svn.tigris.org/

HTH,

Ben




- Original Message 
 From: fe...@crowfix.com fe...@crowfix.com
 To: gentoo-user@lists.gentoo.org
 Sent: Tue, November 2, 2010 12:02:58 AM
 Subject: [gentoo-user] Converting RCS/CVS to git
 
 I have a small RCS repository which I would like to convert to git.
 It has no  branches, no subdirs, and only a few files.
 
 I found one conversion  utility which claimed to convert directly from
 RCS to git, but it failed, and  I no longer remember its name or how it
 failed, other than it sounded like  more than  a simple failure.
 
 I can convert it to CVS manually simply  enough.
 
 I found git has a cvsimport command, but it complained that cvs  didn't
 recognize the server command, and some hints I saw of requiring cvs  2
 made me pause ... all I can see is cvs 1.12.  Vague fuzzy old  memories
 make me think there was a cvs 2, but I see nothing in gentoo for  it.
 I am not  excited at git expecting a cvs server; I'll be danged if  I'm
 going to muck around with that just to convert a few files when  git
 has direct access to the ,v files themselves.
 
 Anyone have any  suggestions?  Don't feed me google pages; I am asking
 for personal  experience.  It would also be interesting to know what
 this cvs 2  business is.  It's hard to google for that ...
 
 -- 
  ... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ ..  ._.
  Felix Finch: scarecrow repairman  rocket surgeon / fe...@crowfix.com
   GPG = E987 4493  C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
 I've found a  solution to Fermat's Last Theorem but I see I've run out of 
 room 
o
 
 



Re: [gentoo-user] Converting RCS/CVS to git

2010-11-02 Thread BRM
- Original Message 

 From: fe...@crowfix.com fe...@crowfix.com
 On Tue, Nov 02, 2010 at 08:41:27AM -0700, BRM wrote:
  The cvs2svn project  also has a cvs2git tool.
  
   http://cvs2svn.tigris.org/
 
 Interesting ... downloaded and tried it, but  no time for a full
 reading of the docs ... got an empty git repository so I  will have to
 explore it further later :-)
 

Jump on the cvs2svn mailing list if you continue to have problems.
The mailing list is very low-volume (on the order of 100 messages per month) and
the author is quite responsive to issues.

I haven't used cvs2git myself, though I have used cvs2svn several times.
It's a wonderful little tool.

Ben




Re: [gentoo-user] baselayout -- openrc ?

2010-10-25 Thread BRM
- Original Message 

 From: Neil Bothwick n...@digimed.co.uk
 On Mon, 25 Oct 2010 00:29:30 +0200, Alan McKinnon wrote:
Although, perhaps I'm missing something but doesn't alpha
   come  *before* release candidate?  :)  
  Yes,  but:
  
  2.2.0_alpha1 comes *after* 2.2_rc99
 
 It should also  come after 2.2, but I appear to have missed that release.

Why? 2.2 == 2.2.0

So 2.2.0_alpha1 would make a logical progression.

Ben




Re: [gentoo-user] LibreOffice

2010-10-15 Thread BRM
- Original Message 

 From: Volker Armin Hemmann volkerar...@googlemail.com
 b) 'libreoffice' - which is  competing for the title 'most idiotic name ever' 
 - 

 is based on  go-openoffice. Gentoo already uses  the go-openoffice patches.

Based on what I read on the Document Foundation's website, I do not believe it 
is based on Go-OOo at all; but that they just accepted the patches in 
mainline.
From their FAQ (http://www.documentfoundation.org/faq/) - emphasis mine:


Q: What does this announcement mean to other derivatives of OpenOffice.org? 
A: We want The Document Foundation to be open to code contributions from as
many people as possible. We are delighted to announce that the enhancements
produced by the Go-OOo team will be *merged* into LibreOffice, effective
immediately. We hope that others will follow suit.


Ben




Re: [gentoo-user] IP aliasing problem

2010-10-07 Thread BRM
 ServerName differently for  each VirtualHost.  Strangely though, I

 still don't get stats for RX/TX  from ifconfig:
 
 eth0  Link encap:Ethernet  HWaddr  [removed]
   inet addr:1.2.3.1   Bcast:[removed]  Mask:255.255.255.248
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:923677 errors:0 dropped:0 overruns:0  frame:0
   TX packets:1444212 errors:0  dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
   RX  bytes:124904402 (119.1 MiB)  TX bytes:1880087116 (1.7 GiB)
Interrupt:40
 
 eth0:1Link  encap:Ethernet  HWaddr [removed]
   inet  addr:1.2.3.2  Bcast:[removed]  Mask:255.255.255.248
UP BROADCAST RUNNING MULTICAST  MTU:1500   Metric:1
   Interrupt:40

Remember eth0:1 is an alias for eth0.

Your output above is slightly misleading in that eth0 could just as well be listed
as eth0:0; ifconfig is showing the generic eth0 information and the eth0:0
information combined.
That's probably the source of your confusion.

Don't know how to remedy it though.
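If nothing else, the iproute2 view makes the aliasing explicit (a sketch; an alias
is just an extra address on eth0, so there are no separate per-alias counters to
report):

ip addr show eth0        # lists both addresses under the one interface
ip -s link show eth0     # RX/TX counters for the physical interface as a whole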

HTH,

Ben




Re: [gentoo-user] Copying a file via ssh with no password, keeping the system safe

2010-10-07 Thread BRM
- Original Message 

 From: cov...@ccs.covici.com cov...@ccs.covici.com
 To: gentoo-user@lists.gentoo.org
 Sent: Thu, October 7, 2010 6:21:15 PM
 Subject: Re: [gentoo-user] Copying a file via ssh with no password, keeping 
 the 
system safe
 
 Momesso Andrea momesso.and...@gmail.com  wrote:
 
  
  Quoting Andrea Conti a...@alyf.net:
  
   On  07/10/2010 18:45, Momesso Andrea wrote:
  
   Setting up  a public key, would do the job, but then, all the connections
between the servers would be passwordless, so if server A gets
compromised, also server B is screwed.
  
   Well, not  really... public key authentication works on a per-user basis,
   so  all you get is that some user with a specific key can log in as some
other user of B without typing a password.
  
   Of  course, if you authorize a given key for logging in as r...@b, then
what you said is true. But that is a problem with the specific setup.
   
   Is there a way to allow only one single command from a  single cronjob to
   operate passwordless, while keeping all the  other connections secured by
   a password?
  
You can't do that on a per-command basis. You'd be trying to control  the
   authentication method accepted by sshd on B according to which  command
   is run on A -- something sshd on B knows nothing  about.
  
   I would try the following way:
   
   - Set up an unprivileged user on B -- let's call it foo --  which can
   only write to its own home directory, /home/foo.
   
   - add the public key you will be using (*) to f...@b's  authorized_keys
   file. You should set the key's options to
'pattern=address_of_A,no-pty,command=/usr/bin/scp -t --  /home/foo'
   (man sshd for details).
  
   -  chattr +i /home/foo/.ssh/authorized_keys, so that the file can only be
changed by a superuser (you can't just chown the file to root as sshd  is
   quite anal about the permissions of the authorized_keys  file)
  
   Now your cron job on A can do scp file  f...@b:/home/foo without the
   need for entering a password; you just  have to set up another cron job
   on B that picks up the file from  /home/foo and puts it where it should
   go with the correct  permissions, possibly after doing a sanity check on
   its  contents.
  
   If you use something else than scp, (e.g.  rsync) you should also adjust
   the command option in the key options  above.
   Note that the option refers to what is run on B, not on A.  Also, it is
   *not* an authorization directive à la /etc/sudoers  (i.e., it does not
   specify what commands the user is allowed to  run): it simply overwrites
   whichever command is requested by the  client side of the ssh connection,
   so that, for example, the client  cannot request a shell or do cat
   somefile.
   
   (*) You can either use the key of the user running the cron  job on A, or
   generate a separate key which is only used for the  copy operation. In
   this case, you will need to tell scp the  location of the private key
   file with the -i option.
   
   HTH,
   andrea
  
  
  
  Thank you all for your fast replies, I think I'll use all of your  
suggestions:
  
  -create an unprivilegied user with no shell access  as Stroller and
  Andrea suggested
  
  -I'll setup a  passwordless key for this user, only limited to a single
  command, as  Willie
  suggested
  
  This sounds pretty sane to me.
 I think for ssh to work the user needs a valid shell, not nologin, so
 you can't do both of those suggestions.

Wouldn't a shell-less account still provide the ability to use SFTP/SCP?
Those don't require an interactive shell to operate.

You only need a shell if you are going to actually log in as a user and do
something other than a file transfer.

Also, ssh can be run in multiple modes - some of which do not require a shell; 
for example:

ssh someu...@myhost.com /bin/false

will run the command /bin/false without initiating a shell. (man ssh for 
details).
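A couple of concrete examples (a sketch; the key path, user, and host names are
placeholders):

ssh -i ~/.ssh/backup_key backupuser@serverB /usr/bin/uptime
scp -i ~/.ssh/backup_key /tmp/report.txt backupuser@serverB:/home/backupuser/

The first runs a single remote command and exits; the second just copies a file -
neither opens an interactive session.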

$0.02

Ben




[gentoo-user] Android SDK

2010-10-07 Thread BRM
I noticed there have been a few Android SDK's in portage now for a while - 
originally android-sdk, now android-sdk-update-manager 
(http://packages.gentoo.org/package/dev-util/android-sdk-update-manager).
I know there is a bug on it - http://bugs.gentoo.org/show_bug.cgi?id=320407 - 
but that's the only one I can find.

Anyone have an idea on when some version might go stable?
There are 7 versions in portage, all in testing; the first (version 3) goes back
to November 2009.

TIA,

Ben




Re: [gentoo-user] IP aliasing problem

2010-10-06 Thread BRM
- Original Message 

 Thank you for taking the time to write Stroller.  This has  really got
 my head spinning.  First of all, you're right about the  netmask.  It
 is 255.255.255.248.  I didn't have a good  understanding of what a
 netmask is so I thought it would be smart to change  it for a public
 message.
 
 The server is remote and hosted so I don't  have any control over the
 router or network.  I've gone back and forth  with the host but they
 insist that everything is fine on their  end.
 
 I'm confused because I have in apache2  config:
 
 VirtualHost 1.2.3.1:443
 ...
 SSLCertificateFile  /etc/apache2/ssl/www.example1.com.crt
 SSLCertificateKeyFile  /etc/apache2/ssl/www.example1.com.key
 ...
 /VirtualHost
 VirtualHost  1.2.3.2:443
 ...
 SSLCertificateFile  /etc/apache2/ssl/www.example2.com.crt
 SSLCertificateKeyFile  /etc/apache2/ssl/www.example2.com.key
 ...
 /VirtualHost
 
 But  if I request https://1.2.3.2 or https://1.2.3.2:443, I'm  presented
 with an SSL cert that has www.example1.com for the Common Name.   I
 used openssl to verify that the Common Name for www.example2.com.crt
 is www.example2.com.
 

I would suggest setting up separate access and error logs for each virtual host 
so you can see who is actually getting the connection, and then going from 
there.
That will probably point out your real problem.
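For instance, something like this inside each vhost (a sketch - the log paths are
placeholders, and the 'combined' format is assumed to be defined in the main
Apache config):

<VirtualHost 1.2.3.2:443>
...
ErrorLog  /var/log/apache2/www.example2.com_error_log
CustomLog /var/log/apache2/www.example2.com_access_log combined
...
</VirtualHost>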

Ben




Re: [gentoo-user] dev-util/autotoolset

2010-10-05 Thread BRM
- Original Message 

 From: Alan McKinnon alan.mckin...@gmail.com
 To: gentoo-user@lists.gentoo.org
 Cc: dhk dhk...@optonline.net
 Sent: Tue, October 5, 2010 7:34:02 AM
 Subject: Re: [gentoo-user] dev-util/autotoolset
 
 Apparently, though unproven, at 12:33 on Tuesday 05 October 2010, dhk did 
 opine thusly:
 
  On 10/05/2010 05:28 AM, Alan McKinnon  wrote:
   Apparently, though unproven, at 10:25 on Tuesday 05 October  2010, dhk did
   
   opine thusly:
   What  should I do about dev-util/autotoolset?  I use it every day for a
project, but today it looks like I'm being told to remove it.   What is
   the alternative?
   
   In  my package.keywords file I have dev-util/autotoolset ~amd64
unmasked since I use autotools for a project.
   
After the output of emerge -uDNp world this morning the following
message is displayed.
   
   !!! The  following installed packages are masked:
   -  dev-util/autotoolset-0.11.4-r1 (masked by: package.mask)
/usr/portage/profiles/package.mask:
   # Diego E. Pettenò flamee...@gentoo.org (04 Oct  2010)
   #  on behalf of QA team
   #
# Ironically, it is misusing autotools (bug #255831). It was
# added in 2004 and never version bumped since; autotools
# have since evolved a fair amount, while this is based
# still on automake 1.6. Avoid keeping it around.
#
   # Removal on 2010-12-03
   
For more information, see the MASKED PACKAGES section in the emerge
man page or refer to the Gentoo Handbook.
   
As stated above, what should I Avoid keeping it around?  Is  it
   autotoolset, automake 1.6, ...?  Also, what's getting  removed on
   2010-12-03?  If autotoolset is removed, what  should be used to build
   projects?
   
The text applies to autotoolset and the reason it is being removed. It  
is
   autotoolset that is not being kept around anymore.
   
   It's actually quite obvious once you calm down, get over your  fright, and
   read the message.
   
   You do not  need autotoolset to build projects. You need autotools which
   is not  the same thing.
   
   Even though I have autotoolset  installed, search shows it as being
   masked.
   
   # emerge --search autotoolset
Searching...
   [ Results for search key : autotoolset ]
[ Applications found : 1 ]
   
   *   dev-util/autotoolset [ Masked ]
   
  Latest version available: 0.11.4-r1
  Latest version installed: 0.11.4-r1
 Size  of files: 1,133 kB
 Homepage:   http://autotoolset.sourceforge.net/
  Description:   colection of small tools to simplify project

   development with autotools
   
  License:   GPL-2

   What to do now?
   
   Why are  you worried? You use autotools not autotoolset. Let the thing be
removed, After 6 years of no updates you shouldn't be using it anyway.
  
  So are you saying if I remove autotoolset that I'll still have  autoconf,
  automake, and the rest; and everything will work the  same?  I thought
  all the autotools were in autotoolset.  I  guess I don't know the
  difference between autotools and autotoolset and  what they are made up of.
 
 
 autotools != autotoolset
 
 The  description from eix that you yourself posted tells you as much.
 
 Run  equery files autotoolset and see what is in the package. Decide for 
 yourself if you want to keep it and if so move the ebuild to your local 
 overlay where you can maintain it for yourself.

Reading over the website it seems almost as if it is a fork of GNU autotools.

http://autotoolset.sourceforge.net/

But to the original question - if autoconf, automake, and the rest do not remain
installed after the cleanup, you can always install them individually as well.

Ben




Re: [gentoo-user] Gentoos community communication rant

2010-09-07 Thread BRM
- Original Message 

 From: Al oss.el...@googlemail.com
 To: gentoo-user@lists.gentoo.org
 2010/9/7 Volker Armin Hemmann volkerar...@googlemail.com:
   On Tuesday 07 September 2010, Al wrote:
   because he hopes that  you finally shut up?
  Why do you read this thread and  answer to it? Ignore it.
  I would, if you wouldn't put out a  large percentage of emails arriving at 
my
   inbox.
 Execellent. You give the best example why mailing lists  influence
 communication in a negative way.
 You have difficulties to  let some people stay in their own thread,
 beause it all goes through your  inbox. You call that advanced? I
 don't.

Doesn't have anything to do with the communications medium. Email or NNTP - this
thread has become a rant by you for no other purpose than your own agenda.

You're way off topic for this list, and the community has already responded to
you multiple times.
They've even pointed out how to get what you want through existing systems.

Please listen to the community and heed their advice on this one.
Otherwise you may just find yourself unsubscribed, blocked, or blacklisted
(likely by individuals) and you won't ever get the advice you want/need.

Ben




Re: [gentoo-user] Yahoo and strange traffic.

2010-08-25 Thread BRM
- Original Message 

 Joshua Murphy wrote:
  Well, glancing at the GET request it's making  there, as well as the
  API google points me to when I look it  up...
 
   http://developer.yahoo.com/messenger/guide/ch03s02.html#d4e4628
 
   You're right that it's after an image from their profile, but the
  cause  of the failure appears to be related to some sort of credentials
  Yahoo  wants the messenger to provide. You might poke Kopete's
  bugtracker to  see if they've a related bug on file already, and if
  they don't, throw  one their way.
 
  The API Yahoo appears to be using there (based on  a response I got
  back in poking lightly) is, or is based on, OAuth,  which according to
  this:
 
   http://oauth.net/core/1.0/#http_codes
 
  specifies that a request  should give a 401 response (Authorization
  Required vs Unauthorized is  purely the choice of phrase used in the
  program decoding the numerical  code, i.e. wireshark in your example of
  it there) in the following  cases:
 
  HTTP 401 Unauthorized
 * Invalid  Consumer Key
 * Invalid / expired Token
  * Invalid signature
 * Invalid / used nonce
 
   Yahoo, essentially, *does* give a bugger off!! with that response,
  but  Kopete simply takes it, considers it a brief instant, then decides
   Maybe the answer will change if I try again *now*!... at which point
   it proceeds to introduce its proverbial cranium to the proverbial
  brick  and mortar vertical surface one might term the wall.
   Repeatedly.
 
 
 
 I was sort of figuring that it  was trying to get something and Yahoo 
 wasn't liking it.  At least now  we know for sure.
 
 I went to bug.kde and searched but I didn't see  anything.  Of course, 
 I'm not really sure what the heck to look for  since I don't know what is 
 failing, other than  Kopete.

Best bet would probably be to check with the Kopete devs on IRC or mailing list 
(kopete-devel).

Ben




Re: [gentoo-user] Yahoo and strange traffic.

2010-08-17 Thread BRM
- Original Message 

 From: Dale rdalek1...@gmail.com
 Adam Carter wrote:
  Is this easy to do?  I  have no idea where to start except that
  wireshark is  installed.
  Yep, start the capture with Capture -  Interfaces and click on the start 
button next to the correct interface, then  right click on one of the packets 
that is to the yahoo box and choose Decode As  set the port and protocol then 
apply. You'll 

 need to understand the semantics of  HTTP for it to be of much use tho.
 You had me until the last part.   No semantics here.  lol   May see if I can 
post a little and see if  anyone can figure out what the heck it is doing.  
I'm 
thinking some crazy  bug or something.  Maybe checking for updates not 
realizing 
it's 

 Kopete  instead of a Yahoo program.

Wireshark will show you the raw packet data, and decode only a little of it -
enough to identify the general protocol, senders, etc.
So to understand the packet, you will need to understand the application layer
protocol - in this case HTTP - yourself, as Wireshark won't help you there.

Still, Wireshark, nmap, and the nessus security scanner are the tools to reach
for - nessus less so, as it really is more of a port scanner/security-hole finder
than a debugging tool for applications (for these purposes it's basically a front
end to nmap).
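If the GUI gets tedious, the same capture can be driven from a terminal (a sketch;
tshark ships with Wireshark, and the interface name and ports here are assumptions):

tshark -i eth0 -f 'port 5050 or port 80' -V

-f takes a capture filter and -V prints the full protocol decode for each packet.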

HTH,

Ben




Re: [gentoo-user] Yahoo and strange traffic.

2010-08-17 Thread BRM
- Original Message 

 From: Dale rdalek1...@gmail.com
 Mick wrote:
  On Tuesday 17 August 2010 21:15:51 Dale wrote:
  Mick wrote:
   On 17 August 2010 15:29, BRMbm_witn...@yahoo.comwrote:
  -  Original Message 
  From: Dalerdalek1...@gmail.com
   Adam Carter wrote:
Is this easy to  do?  I  have no idea where to start except  that
wireshark is   installed.
  Yep, start  the capture with Capture - Interfaces and click on  
the
  start
  button next to the  correct interface, then  right click on one of the
   packets that is to the yahoo box and choose Decode As  set the  port
  and protocol then apply.  You'll
  need to understand the  semantics of  HTTP for it to be of much use tho.
   You had me until the last part.   No semantics here.  lol   May  see if
  I can post a little and see if  anyone can  figure out what the heck it
  is doing.  I'm thinking  some crazy  bug or something.  Maybe checking
   for updates not realizing it's
   Kopete  instead of a Yahoo program.
  Wireshark will show you the raw  packet data, and decode only a little of
  it - enough to  identify the general protocol, senders, etc.
  So to  understand the packet, you will need to understand the  
application
  layer protocol - in this case HTTP - yourself as  Wireshark won't help
  you  there.
  But yet, Wireshark, nmap, and  nessus security scanner are the tools,
  less so nessus as it  really is more of a port scanner/security hole
  finder than a  debug tool for applications (it's basically an interface
  for  nmap for those purposes).
  I'm not at home to experiment and I don't use yahoo, but port  5050 is
  typically used for mmcc = multi media conference control  - does yahoo
  offer such a service?  It could be a SIP  server running there for VoIP
  between Yahoo registered users or  something similar.
  The http connection could be  offered as an alternative proxy
  connection to the yahoo IM  servers for users who are behind
  restrictive firewalls.   Have you asked as much in the Yahoo user
   groups?
  The fact that the threads continue after  kopete has shut down is not
  necessarily of concern as was  already explained, unless it carries on
  and on for a long time  and the flow of packets continues.  I don't
  know how yahoo  VoIP works.  Did you install some plugin specific for
  yahoo  services?  If it imitates the Skype architecture then it
   essentially runs proxies on clients' machines and this could be  an
  explanation for the traffic.
  I don't have VoIP, Skype or that sort of thing  here.  Here is my Kopete
  info tho:
   [ebuild   R   ] kde-base/kopete-4.4.5-r1  USE=addbookmarks  autoreplace
  contactnotes groupwise handbook highlight history  nowlistening pipes
  privacy ssl statistics texteffect translator  urlpicpreview yahoo
  zeroconf (-aqua) -debug -gadu -jabber -jingle  (-kdeenablefinal)
  (-kdeprefix) -latex -meanwhile -msn -oscar -otr  -qq -skype -sms -testbed
  -v4l2 -webpresence -winpopup 0  kB
  Anything there that cold cause a  problem?
  No, I can't see anything  suspicious, you don't even have skype or v4l2
  enabled, so it is unlikely  that it is running some webcam stream (as part 
of
  VoIP).
 I'm thinking it is Yahoo wanting to upgrade something but not  realizing 
 that I'm not using their client but using kopete.  Yahoo  isn't the 
 sharpest tool in the shed you know?

I doubt that's the case. I use Pidgin with Yahoo, and haven't had that kind of 
thing so far as I'm aware.

Ben




Re: [gentoo-user] Yahoo and strange traffic.

2010-08-15 Thread BRM
- Original Message 

 On Sun, Aug 15, 2010 at 3:34 PM, Dale rdalek1...@gmail.com wrote:
   Hi folks,
  I been noticing the past few weeks that something is  communicating with
  Yahoo at these addresses:
 
  cs210p2.msg.sp1.yahoo.com
 
  rdis.msg.vip.sp1.yahoo.com
 
   I thought it was Kopete getting some info, profile pics maybe, from the
   server.  Thing is, it does this for a really long time.  It is also  
SENDING
  data as well.  I have no idea why it is doing this or what  it is sending.  
I
  closed the Kopete app but the data still carries  on.   This transfer has
 I think it's  normal.
 
 The first address is one of their pool of messaging servers and  the
 second is a web server, probably like you said for  retrieving
 additional info. The sending of data could be the http request,  or
 updating your status/picture/whatever kopete may be doing. You  could
 try blocking it and see what breaks. :)

Likely true, as Yahoo!'s interfaces are highly AJAX driven - with their own
PHP-oriented widget kit as well.
So if you have a web page open to any Yahoo! site, that is probably what is doing
it.

Ben




[gentoo-user] b43-legacy and newer linux kernels?

2010-08-13 Thread BRM
I have a laptop that has been running Linux Kernel 2.6.30 Gentoo-R8 (gentoo 
sources, don't remember which version) for a while. It has a Broadcom 4306 Rev 
2 
wireless card that has been working well with that kernel. I extracted the 
firmware from the broadcom-wl-4.150.10.5 blob a while ago using b43-fwcutter 
011. I have to hard-code the network settings in /etc/conf.d/net for my home 
network, but am able to use wpa_supplicant whenever I go elsewhere. (I think 
it's my home wireless router that causes the issue; probably needs a firmware 
upgrade.)

Anyhow, I recently upgraded to Linux Kernel 2.6.34 Gentoo-R7 (gentoo-sources
2.6.34-r1), again using the b43-legacy driver for the wireless. However, now I
can't keep a network connection up. I keep getting errors from the
/etc/init.d/net.wlan0 startup - namely: SIOCSIFFLAGS Unknown Error 132. I had to
reboot onto the older kernel to write this message and try to research the issue
a little.

From on-line, some sites suggest the following as a solution:

rmmod ath9k
rfkill block all
rfkill unblock all
modprobe ath9k
rfkill unblock all
however, rfkill seems to only be in testing for gentoo
(http://packages.gentoo.org/package/net-wireless/rfkill), and I'm using the
b43-legacy driver instead of ath9k - okay, no problem there, just switch out
which driver is unloaded and reloaded (see the adapted sketch below). I haven't
tried it yet as I have to reboot; but even so - they are saying this has to be
done on every reboot, and that's not much of a solution.
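Adapted to this card it would presumably be (untested, as noted; b43legacy is the
module name the b43-legacy driver loads as):

rmmod b43legacy
rfkill block all
rfkill unblock all
modprobe b43legacy
rfkill unblock all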

Further, I can't seem to find a version of b43-fwcutter that will extract any of
the b43-legacy firmware - even the one I had successfully extracted before (tried
versions 011, 012, and 013).

Has anyone else seen this? Does anyone know if this gets resolved (or made 
worse) by a newer kernel?

Ben




Re: [gentoo-user] b43-legacy and newer linux kernels?

2010-08-13 Thread BRM
- Original Message 

 On 13 August 2010 09:08, Neil Bothwick n...@digimed.co.uk wrote:
  On  Thu, 12 Aug 2010 22:10:02 -0700 (PDT), BRM wrote:
  but even  so - they are saying this has to be done on every reboot, and
  that's  not much of a solution.
  Put the commands in  /etc/conf.d/local.start, or the start section
  of /etc/conf.d/local if  using baselayout2.
 Have you been through the guidance in this page to  find out which
 kernel driver you ought to use with your  card?
 http://linuxwireless.org/en/users/Drivers/b43

Yes. Unfortunately it's a 14e4:4320 with the BCM4306/2 chipset (4306 Rev 2), so it
requires the b43-legacy driver, and only firmware version FW10 supports the
hardware from what I can tell.

It just seems to me that I went from a working wireless on 2.6.30 to a 
non-working wireless on 2.6.34. I'd really like to get back to a working
wireless card, and be on the newer kernel.

While the steps I quoted may be a work around for 2.6.34 - I haven't had a 
chance to test them yet, hopefully tonight - they are just that, a work around 
for a bug.
rfkill did install pretty easily once I unmasked it, but I don't know if it 
will 
work yet either.

Ben




Re: [gentoo-user] Re: State of Radeon drivers

2010-07-28 Thread BRM
I was updating my AMD64 system last night - which has an nVidia card and uses
the nVidia binary stack - and ran into problems. jasper won't compile with
nVidia's provided OpenGL implementation. The bug report[1] notes suggest the
problem is in nVidia's binary layer and all the pieces it replaces. I had to
switch over to the standard X11 OpenGL to compile it. I'll switch it back later,
but there are serious problems with the nVidia binary stack that way.

My point is that in using the binary drivers you are beholden to whatever cards
they choose to support, and you will eventually end up using the open source
drivers once they decide it is no longer worth their effort to support your card.
This holds true for both nVidia and ATI.

Ben

[1] http://bugs.gentoo.org/show_bug.cgi?id=133609



From: App Deb appde...@gmail.com
To: gentoo-user@lists.gentoo.org
Sent: Tue, July 27, 2010 5:16:46 PM
Subject: Re: [gentoo-user] Re: State of Radeon drivers

Nvidia's binary can't be compared to ATI's one. The problems you describe are 
ATI-binary specific.


And yes the nvidia binary replaces a lot of Xorg stuff, but after some time 
you 
will realise that this is a good thing, as the Xorg is a mess, breaks with 
updates, and introduces bugs with each release. And because developers know 
that, they always prepare their software for nvidia, as it is the only 
*serious* 
graphics solution for *nix right now.


Don't get me wrong, I don't even have an nvidia card in my systems right now 
(cause ATI are superior in windows, all my systems have ATI), but I miss the 
times that I had one. So much more stuff worked without problems and with 
better 
performance.


On Tue, Jul 27, 2010 at 4:42 PM, BRM bm_witn...@yahoo.com wrote:

That's great so long as nVidia supports your card. The problem with the binary 
drivers is that they typically only support a percentage of all the cards the 
video maker makes.
For example, I can't use the ATI binary driver on my laptop since it no 
longer 
supports the R250 chipset, only their latest 3 or 4 generations of cards. So 
I 
have to use the OSS driver, which works great with it.
I have been able to use both the OSS and proprietary drivers on my desktop 
with 
an nVidia card, but I don't know how much longer that will last.

nVidia's proprietary driver is good namely because it is the same at the core as 
on Windows and Mac, and they wrap it to make it work with the *nix kernels. 
However, they also do a lot of other funky stuff and keep people from being able 
to fully use the full extent of X. Just search this list (among others) for 
XRandR and other components of X and you'll see the full story of nVidia's 
proprietary driver.

Ben



From: App Deb appde...@gmail.com
To: gentoo-user@lists.gentoo.org
Sent: Tue, July 27, 2010 5:29:10 AM
Subject: Re: [gentoo-user] Re: State of Radeon drivers


If you are going to use any *nix, nvidia is the best option for years now. 
The 
nvidia closed source drivers are of professional quality and have great 
performance. Actually they are the *standard* for graphics in *nix, and many 
(professional or not) applications actually support only nvidia.


The ati oss driver is still under development, sometimes it works ok, 
sometimes 
not, and it is mostly for basic desktop usage and in my opinion it is 
progressing too slow. Anyway, I don't like having a driver that uses 10% of 
my 
hardware's capabilties. So until it actually reaches 100% (like the rest of 
the 
linux drivers) I can't recommend ATI on linux and nvidia is the way to go.


On Mon, Jul 26, 2010 at 7:32 PM, Florian Philipp 
li...@f_philipp.fastmail.net 
wrote:

Am 26.07.2010 01:01, schrieb James:

 Florian Philipp lists at f_philipp.fastmail.net writes:


 I have a quick question: I plan to buy a notebook with an ATI Mobility
 Radeon HD 4250. How well would that one work? Can I reasonably expect
 Suspend2Ram, 3d acceleration etc to work stable?

 Well, lots of good information previously posted. Here's a
 few more tidbits. When ATI video get's older, there's
 always good opensource solutions to keep using it. Nvidia,
 sometimes you toss in garbage can, or use vesa or
 get lucky? Dunno, as I personally avoid Nvidia; other
 insist on Nvidia. kinda a religious thing with some.


Hehe, religious is the right word. I remember a situation at my
workplace: The admin of our departement IT ordered a Linux workstation
with (fully supported) ATI graphics. At the last second he was overruled
by the head of our institute's IT in favor of a completely unsupported
and more expensive NVidia card. Not only did the poor guy have to wait
two more weeks for the shipment to arrive, he was also stuck with the
VESA driver for half a year and unstable NVidia drivers ever since.

Well, thanks everyone who answered! Problem solved.

Florian Philipp





Re: [gentoo-user] Re: State of Radeon drivers

2010-07-27 Thread BRM
That's great so long as nVidia supports your card. The problem with the binary 
drivers is that they typically only support a percentage of all the cards the 
video maker makes.
For example, I can't use the ATI binary driver on my laptop since it no longer 
supports the R250 chipset, only their latest 3 or 4 generations of cards. So I 
have to use the OSS driver, which works great with it.
I have been able to use both the OSS and proprietary drivers on my desktop with 
an nVidia card, but I don't know how much longer that will last.

nVidia's proprietary driver is good namely because it is the same at the core as
on Windows and Mac, and they wrap it to make it work with the *nix kernels.
However, they also do a lot of other funky stuff and keep people from being able
to fully use the full extent of X. Just search this list (among others) for
XRandR and other components of X and you'll see the full story of nVidia's
proprietary driver.

Ben



From: App Deb appde...@gmail.com
To: gentoo-user@lists.gentoo.org
Sent: Tue, July 27, 2010 5:29:10 AM
Subject: Re: [gentoo-user] Re: State of Radeon drivers

If you are going to use any *nix, nvidia is the best option for years now. The 
nvidia closed source drivers are of professional quality and have great 
performance. Actually they are the *standard* for graphics in *nix, and many 
(professional or not) applications actually support only nvidia.


The ati oss driver is still under development, sometimes it works ok, 
sometimes 
not, and it is mostly for basic desktop usage and in my opinion it is 
progressing too slow. Anyway, I don't like having a driver that uses 10% of my 
hardware's capabilties. So until it actually reaches 100% (like the rest of 
the 
linux drivers) I can't recommend ATI on linux and nvidia is the way to go.


On Mon, Jul 26, 2010 at 7:32 PM, Florian Philipp 
li...@f_philipp.fastmail.net 
wrote:

Am 26.07.2010 01:01, schrieb James:

 Florian Philipp lists at f_philipp.fastmail.net writes:


 I have a quick question: I plan to buy a notebook with an ATI Mobility
 Radeon HD 4250. How well would that one work? Can I reasonably expect
 Suspend2Ram, 3d acceleration etc to work stable?

 Well, lots of good information previously posted. Here's a
 few more tidbits. When ATI video get's older, there's
 always good opensource solutions to keep using it. Nvidia,
 sometimes you toss in garbage can, or use vesa or
 get lucky? Dunno, as I personally avoid Nvidia; other
 insist on Nvidia. kinda a religious thing with some.


Hehe, religious is the right word. I remember a situation at my
workplace: The admin of our departement IT ordered a Linux workstation
with (fully supported) ATI graphics. At the last second he was overruled
by the head of our institute's IT in favor of a completely unsupported
and more expensive NVidia card. Not only did the poor guy have to wait
two more weeks for the shipment to arrive, he was also stuck with the
VESA driver for half a year and unstable NVidia drivers ever since.

Well, thanks everyone who answered! Problem solved.

Florian Philipp




Re: [gentoo-user] xorg-server 1.7.

2010-04-10 Thread BRM
- Original Message 

 From: Neil Bothwick n...@digimed.co.uk
 To: gentoo-user@lists.gentoo.org
 Sent: Sat, April 10, 2010 9:14:56 AM
 Subject: Re: [gentoo-user] xorg-server 1.7.
  After that you kann kill X without disturbing the kernel (and risk your 
  data) with ALT-Backspace. You will get back a console. Log in as root 
  and do a telinit 2 since the setuo still think of running runlevel 5 
  without X and this is not a sane setup: Runlevel 5 is with X and 
  runlevel 2 is without X.
 This is Gentoo, not Red Hat.

And isn't it:

Run Level 1 - Single user w/o network
Run Level 2 - Multiuser w/o network
Run Level 3 - Multiuser w/ network
Run Level 4 - Multiuser w/ network+X
Run Level 5 - unassigned (typically same as Run Level 3)

So it would have to think it's in Run Level 4 not 5, which is still a sane 
setup, just incorrect by convention.

Or did LSB change the conventions?

Ben





Re: [gentoo-user] Re: Who believes in cylinders?

2010-02-27 Thread BRM
- Original Message 

 From: walt w41...@gmail.com
 To: gentoo-user@lists.gentoo.org 
 On 02/26/2010 06:23 PM, BRM wrote:
  From: Mark Knecht
  On Fri, Feb 26, 2010 at 4:09 PM, walt wrote:
  Is there really any need for the cylinder these days?
  Who cares what cylinder it's on, and
  who cares which head is getting the data? It doesn't matter to us
  users...
  ...Boot Loader writers (e.g. grub) need to care about it since LBA
  is not quite available right away - you have to focus on other things
  until you can load the rest of the boot loader.
 Ah, this may be a big part of what's confusing me because I've done a
 lot of playing around with grub.
 At what point *does* LBA become available, and who makes it available?
 Is this one of those stupid BIOS things?

It becomes available once enough boot loader code has been loaded to make use of
it.

Boot loaders are typically broken into two or three parts: Stage 1, Stage 2,
and (optionally) Stage 3.
Stage 1 focuses solely on loading Stage 2, and has historically been limited to a
total size[1] of 512 bytes - actually 510 bytes, since bytes 511 and 512 hold the
2-byte boot signature. This limitation exists primarily because the BIOS loads
just the first sector of the bootable disk (identified by those two bytes) into
memory. Perhaps someone could write assembly craftily enough to use LBA even in
the first stage.
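If you want to see that sector for yourself, a quick sketch (substitute your real
boot disk for /dev/sda, run as root):

dd if=/dev/sda bs=512 count=1 2>/dev/null | hexdump -C | tail -4

The last two bytes of the sector should be 55 aa - the boot signature mentioned
above; everything before it is Stage 1 plus the partition table.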

Usually LBA is enabled in Stage 2 - since more code space is available you can do
more: enable protected/long mode, set up paging, load more code from disk, etc.,
so you don't have as many limitations. Intel et al. were looking to change this
somewhat with the EFI BIOSes, but I'm not sure that succeeded.

You'll have to talk to the grub/lilo/etc guys to get a better feel for this; 
but that's what I'm aware of.

Ben

[1] On x86 at least; other systems (e.g. Sun SPARC) build more into the boot
firmware, so boot loaders get more functionality sooner and don't have to worry
about it.




Re: [gentoo-user] Re: Removing KDE 3.5? Or reason to keep it around?

2010-02-27 Thread BRM
- Original Message 

 From: Dale rdalek1...@gmail.com
 To: gentoo-user@lists.gentoo.org
  On Sat, 27 Feb 2010 11:12:18 -0600, Dale wrote:
  Did you mean to say that you CAN'T break a system be editing world
  because the critical packages aren't in there?
  indeed I did.
  At least I am not the only one that leaves out the NOT sometimes.
  The difference is, I don't blame it on hal :P
 But hal doesn't do the typing, we do.  Poor old hal broke my rig and it 
 just doesn't work so, I broke it.  I guess removing it could be called 
 breaking it.  ;-)

Wouldn't that be a break-up?
;-)

Ben

P.S. Thanks all for the help on this one. Got everything straightened out, and 
KDE3.5 has been removed. :D





[gentoo-user] Backups...

2010-02-27 Thread BRM
Well, now that I've got my systems cleaned up, and KDE3 removed, I'm tackling 
another project I've been meaning to do - backups.

Here's my basic plan:
- I've got a directory on my server that I want to synchronize several systems 
with (some linux, and one Windows).
- I want clients to push the backup; and not the server to pull it.
- Clients may backup more than once a month.
- the server will receive an additional backup itself once a month which 
includes all the client backups (may be more often, not sure).

At least on the Linux systems, I've settled on using rsync for the backup - easy
enough to do. I'm already running an rsync server for hosting portage, so it's
relatively trivial to add another rsync module to support this, though I'm not
sure it's the best way.

rsync is attractive since it will do delta transfers to keep things in sync;
though if I could use scp the same way I probably would, since I would just have
to set up appropriate keys.

Anyhow... I set up the rsync daemon with a read-write section. Tested it, and it
worked. But I'd really like to have it secured - I don't want just anyone to be
able to read/write to it. So I tried adding the following:

[backup]
    uid = backup user
    gid = backup group
    path = /path/to/backup/repo
    read only = false
    list = false
    auth users = user
    secrets file = /path/to/rsyncd.secrets

The rsyncd.secrets is simple:
user:8 digit password

If I don't have the last two lines (i.e. auth users, secrets file) then I can
write to it.
Otherwise I get an authentication error:

@ERROR: auth failed on module backup
rsync error: error starting client-server protocol (code 5) at main.c(1503) 
[sender=3.0.6]

I'm uploading via:

rsync -a --password-file=rsync.passwd someTestFile 
rsync://user@host/backup/extra/path/

rsync.passwd contains the same 8 digit password, nothing else.


I've already checked file permissions - the entire directory structure under 
/path/to/backup/repo is owned by backup user:backup group.
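One more thing I should probably rule out (just a guess on my part): with the
rsync daemon's default strict modes, the secrets file must not be readable by
other users, and the client likewise rejects a password file that is
world-readable:

chmod 600 /path/to/rsyncd.secrets   # on the server
chmod 600 rsync.passwd              # on the client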

What am I doing wrong?
Is there a better approach?

TIA,

Ben




Re: [gentoo-user] Re: Removing KDE 3.5? Or reason to keep it around?

2010-02-26 Thread BRM
- Original Message 

 From: Dale rdalek1...@gmail.com
 To: gentoo-user@lists.gentoo.org
 chrome://messenger/locale/messengercompose/composeMsgs.properties:
  On 02/26/2010 06:06 AM, BRM wrote:
  I am quite happy with KDE4 - presently using KDE 4.3.5. I still have KDE 
 3.5.10 installed, and am wondering how much longer I need to keep it 
 around...I 
 probably use all KDE4 apps, though there might be a few here or there that I 
 use 
 on a rare occasion that are still KDE3 based...may be...and no, I don't plan 
 on 
 using KDE Sunset Overlay[1]
  Any how...I'm wondering what the best method to remove KDE3.5 safely is:
  1) Just leave it and may be it'll just get removed?
  2) Found this entry on removing it
 http://linuxized.blogspot.com/2008/10/how-to-unmerge-kde-3-packages-if-their.html
  
  But nothing registers as a 'dup' even though qlist does show a lot of KDE 
 3.5.10 packages installed. (Yeah, I'd need to modify the line to ensure it 
 doesn't remove KDE 4.3.5).
  3) Gentoo KDE4 guide suggests a method, but it seems to be more related to 
 removing KDE entirely...
  http://www.gentoo.org/proj/en/desktop/kde/kde4-guide.xml
  If you keep your world file (/var/lib/portage/world) tidy, simply deleting 
  all 
 lines with KDE3 packages and running emerge -a --depclean will take care of 
 it.
  You *do* keep your world file tidy, don't you? :P
 That would be the easiest method.  If you use the kde-meta package like I do, 
 just remove the one for KDE 3 and let --depclean do its thing.  It should get 
 all of it.

I actually don't touch the world file, and just do 'emerge world -vuDNa' for
updates. From my POV, that is emerge/Portage's job - not mine.

Aside from that, I'm not sure I have ever really run emerge --depclean; but then
I also rarely uninstall anything, and I don't install things left and right to
try out either, so typically upgrades are all I need to do.

Having just done a compiler upgrade, I can say that there are roughly 1100 
packages (emerge -eav) in world that were recompiled.

I was just contemplating - KDE4 is stable, and I don't see myself running KDE3 
again; so why keep it around.
If 'emerge world -vuDNa' will remove it when it gets pushed off the main trunk, 
then that's probably fine with me - since that seems to not be very far out now.
If not, then I definitely want to remove it now as there is no other reason for 
keeping it around.

 If you want to keep something tho, you need to add it to the world file first 
 and then run --depclean.  That way it will keep the program(s) you want and 
 the 
 things they depend on but remove everything else.  This will save you from 
 having to reinstall those packages.  You may even have to get them from the 
 overlay at that point.  So don't uninstall something you want to keep.

That's the only issue. My only concern is software (e.g. KDevelop) that may not 
have been updated to KDE4 yet. (Not a fan of KDevelop3; waiting to see how 
KDevelop4 is going to shape up.)
 
 If you have the drive space, you can leave it there for a while longer tho.  
 Just keep in mind that there are no security updates or anything like that.  
 If 
 you add the overlay, you will get a few updates at least.

I do have the disk space on the systems I have KDE3 and KDE4 on; so that's not 
a concern.

Ben





Re: [gentoo-user] Re: Removing KDE 3.5? Or reason to keep it around?

2010-02-26 Thread BRM
- Original Message 

 From: Alex Schuster wo...@wonkology.org
 To: gentoo-user@lists.gentoo.org
 BRM writes:
If you keep your world file (/var/lib/portage/world) tidy, simply
deleting all lines with KDE3 packages and running emerge -a --
depclean will take care of it.
You *do* keep your world file tidy, don't you? :P
   That would be the easiest method.  If you use the kde-meta package
   like I do, just remove the one for KDE 3 and let --depclean do its
   thing.  It should get all of it.
  I actually don't touch the world file, and just do the 'emerge world
  -vuDNa' for updates. From my POV, that is emerge/Portage's job - not
  mine.
 I'd also leave the world file alone, and emerge -C the packages I want 
 removed.

Yep, that's what I do.
 
  Aside from that, I'm not sure I have ever really run emerge
  --depclean, but I also rarely uninstall anything, but don't install
  things left or right to try out either, so typically upgrades are all
  I need to do.
  Having just done a compiler upgrade, I can say that there are roughly
  1100 packages (emerge -eav) in world that were recompiled.
  I was just contemplating - KDE4 is stable, and I don't see myself
  running KDE3 again; so why keep it around. If 'emerge world -vuDNa'
  will remove it when it gets pushed off the main trunk, then that's
  probably fine with me - since that seems to not be very far out now.
  If not, then I definitely want to remove it now as there is no other
  reason for keeping it around.
 KDE3 is no longer in the portage tree, it's in the kde-sunset overlay.
 
 World updates do not remove things, you need to use emerge --depclean for 
 this. It will probably want to remove a lot when you never depcleaned 
 before, so be sure to check. Put the stuff you want to keep in your world 
 file with emerge -n, then depclean the rest. I guess it will remove your 
 whole KDE3 that is no longer in portage. If you like to keep it, add the 
 kde-sunset overlay with laymanl, and maybe emerge kde-base/kde-meta:3.5.

Thanks. That's what I needed to know.
 
  That's the only issue. My only concern is software (e.g. KDevelop) that
  may not have been updated to KDE4 yet. (Not a fan of KDevelop3;
  waiting to see how KDevelop4 is going to shape up.)
 The KDE4 version is in the kde overlay, but I do not know if it is usable 
 already.

For now, I'll wait. I mostly use vim; and having done a lot of Windows stuff 
for work I am familiar with VS.
While there are a lot of things I don't like about VS, nothing else seems to 
quite compare.
KDevelop3 at least drove me nuts; and Eclipse just doesn't do well when you're 
not programming in Java - I have yet to get CDT to work, though I've mostly 
tried on Windows.
QtCreator seems to be on the right track, though it's still quite early.

I'm interested to see how KDevelop4 is going to turn out, but I'll certainly 
wait for it to reach the mainline tree.

Thanks for the info.

Ben





Re: [gentoo-user] Advice/best practices for a new Gentoo installation

2010-02-26 Thread BRM
- Original Message 

 From: Paul Hartman paul.hartman+gen...@gmail.com
 To: gentoo-user@lists.gentoo.org
 Some topics I'm thinking about (comments welcome):
 - be aware of cylinder boundaries when partitioning (thanks to the
 recent thread)
 - utilizing device labels and/or volume labels instead of hoping
 /dev/sda stays /dev/sda always

I've never had an issue with /dev/sda changing, but I don't change out hard 
drives a lot either.
If you're doing hot-pluggable systems, maybe. But it typically does the right 
thing.

I haven't gotten around to doing it yet, but one thing I did think about was 
setting up udev to recognize certain external hard drives - e.g. always mapping 
my backup drive to a fixed device node for backups instead of the normal 
prompting.
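
The rough shape of the rule I have in mind is something like this (the serial 
number is a placeholder - the real one shows up in 'udevadm info -a -n 
/dev/sdb1' - and the mount point is just an example):

# /etc/udev/rules.d/99-backup-disk.rules
KERNEL=="sd?1", ATTRS{serial}=="XXXXXXXXXXXX", SYMLINK+="backupdisk"

# /etc/fstab
/dev/backupdisk   /mnt/backup   ext3   noauto,noatime,user   0 0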

 - initrd - I've never used one, but maybe it's needed if root is on
 software RAID?

You only need initrd if you can't build a kernel with everything needed to boot 
up - namely, when you need to load specialized firmware to access the hard 
drive or if you are doing net-booting.

 - grub/kernel parameter tips and tricks... i'm already using uvesafb,
 and don't dual-boot with MSWin or anything, just Gentoo

I typically make sure to alias or map a default that should always work. It's 
my standard boot-up unless I'm testing out a new kernel build.
When I do an update, I add the new kernel to the list without modifying the 
default until I've verified that the updated kernel is working.
It works better under LILO than grub, if I recall.
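
With grub legacy, a minimal sketch of that layout would be something like this 
(kernel names and the root= device are placeholders for whatever your setup uses):

default 0
timeout 10

# known-good kernel stays as the default entry
title Gentoo Linux 2.6.30-gentoo-r8 (known good)
root (hd0,0)
kernel /boot/kernel-2.6.30-gentoo-r8 root=/dev/sda3

# new kernel gets its own entry until it has proven itself
title Gentoo Linux 2.6.31-gentoo (testing)
root (hd0,0)
kernel /boot/kernel-2.6.31-gentoo root=/dev/sda3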

 - better partitioning scheme than my current root, boot, home (need
 portage on its own, maybe /var as well?)

I have taken to putting portage on its own partition to keep it from filling up 
the root partition, which has happened to me on a few systems more than once.
So yes, definitely +5.
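
The mount entry itself is nothing special - something along these lines, with 
the device obviously depending on your layout:

/dev/sda5   /usr/portage   ext2   noatime   0 0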

 - best filesystem for portage? something compressed or with small
 cluster size maybe.

1. Stay away from reiserfs. Yeah, I know there's a big fan base for it; but 
it's not so big in the recovery distro area.
2. Ext2/3 are now more than sufficient and supported out-of-the-box by nearly 
all recovery distros. I haven't tried Ext4 yet, but it seems very able as well.

From various things I've seen, XFS or JFS are about the only other filesystems 
that offer real benefits, and only where their use case makes sense.
But for the most part, Ext2/3/4 will more than suffice for most everyone's 
needs; and when they don't, you're typically doing something specialized enough 
that you need to pick the right filesystem for that particular workload - in 
which case general recommendations don't cut it anyway.

(Why care about recovery disks: B/c you never know when you're going to need to 
access that partition.)
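
If you do give the portage tree its own partition, the one tweak that seems 
worth it for all those tiny files is a small block size and a denser inode 
table - something like the following, with the device and numbers purely 
illustrative:

mke2fs -j -b 1024 -i 2048 -L portage /dev/sda5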

 - SSD vs 10k rpm vs big-and-cheap hard drive for rootfs/system files.
 I lean toward the latter since RAM caches it anyway.

I lean towards just going with standard 10k rpm hard drives with lots of cache; 
though I typically only buy the middle-line Western Digitals (the upper line 
being the server hard drives).

 - omit/reduce number of reserved-for-root blocks on partitions where
 it's not necessary.
 - I have never used LVM and don't really know about it. Should I use
 it? will it make life easier someday? or more difficult?

I tried out LVM (LVM2) thinking it would kind of make sense. I still have one 
system using it; but I ended up abandoning it.
Why? Recovery is a pita when something goes wrong. Not to say it isn't 
flexible, but for most people LVM is unnecessary, kind of like RAID.

 - Is RAID5 still a good balance for disk cost vs usable space vs data
 safety? I can't/don't want to pay for full mirroring of all disks.

RAID is not really necessary for most people. Save it for the sections on doing 
backups - e.g. setting up a drive to back up to that then gets mirrored off - or 
for server support, where RAID is necessary.
But most users don't need RAID.
 
 Or any other tips that apply to things which are difficult to change
 once the system is in use.

KISS.
 
Ben





Re: [gentoo-user] Re: Removing KDE 3.5? Or reason to keep it around?

2010-02-26 Thread BRM
- Original Message 

 From: Neil Bothwick n...@digimed.co.uk
 To: gentoo-user@lists.gentoo.org
 On Fri, 26 Feb 2010 06:34:18 -0800 (PST), BRM wrote:
  Aside from that, I'm not sure I have ever really run emerge
  --depclean, but I also rarely uninstall anything, but don't install
  things left or right to try out either, so typically upgrades are all I
  need to do.
 You should still run --depclean as dependencies change and you could still
 have plenty of no longer needed ones installed.

Okay - so I ran emerge --depclean -a and got the below.
I tried running emerge world -vuDNa as specified, but that didn't resolve it 
either.

I tried looking in the world file (/var/lib/portage/world) but didn't find any 
entries that felt safe to remove.
So, how do I resolve?

TIA,

Ben

Calculating dependencies... done!
 * Dependencies could not be completely resolved due to
 * the following required packages not being installed:
 * 
 *   ~x11-libs/qt-test-4.4.2 pulled in by: 
 * x11-libs/qt-4.4.2   
 * 
 *   ~x11-libs/qt-sql-4.4.2 pulled in by:  
 * x11-libs/qt-4.4.2
 *
 *   ~x11-libs/qt-webkit-4.4.2 pulled in by:
 * x11-libs/qt-4.4.2
 *
 *   ~x11-libs/qt-assistant-4.4.2 pulled in by:
 * x11-libs/qt-4.4.2
 *
 *   ~x11-libs/qt-gui-4.4.2 pulled in by:
 * x11-libs/qt-4.4.2
 *
 *   ~x11-libs/qt-xmlpatterns-4.4.2 pulled in by:
 * x11-libs/qt-4.4.2
 *
 *   ~dev-libs/poppler-0.10.7 pulled in by:
 * virtual/poppler-0.10.7
 *
 *   ~x11-libs/qt-opengl-4.4.2 pulled in by:
 * x11-libs/qt-4.4.2
 *
 *   ~x11-libs/qt-qt3support-4.4.2 pulled in by:
 * x11-libs/qt-4.4.2
 *
 *   ~x11-libs/qt-dbus-4.4.2 pulled in by:
 * x11-libs/qt-4.4.2
 *
 *   ~x11-libs/qt-svg-4.4.2 pulled in by:
 * x11-libs/qt-4.4.2
 *
 *   ~x11-libs/qt-script-4.4.2 pulled in by:
 * x11-libs/qt-4.4.2
 *
 * Have you forgotten to run `emerge --update --newuse --deep world` prior
 * to depclean? It may be necessary to manually uninstall packages that no longer
 * exist in the portage tree since it may not be possible to satisfy their
 * dependencies.  Also, be aware of the --with-bdeps option that is documented
 * in `man emerge`.




Re: [gentoo-user] Who believes in cylinders?

2010-02-26 Thread BRM
- Original Message 

 From: Mark Knecht markkne...@gmail.com
 To: gentoo-user@lists.gentoo.org
 On Fri, Feb 26, 2010 at 4:09 PM, walt wrote:
  Is there really any need for the cylinder these days?
 No, not as I understand it.
 There may be some bits of software that suggest they can use them, but
 I think that with the advent of LBA direct addressing, CHS is now retired,
 with only sector addressing being important due to the way the data is
 physically placed on the drive. Who cares what cylinder it's on, and
 who cares which head is getting the data? It doesn't matter to us
 users...

True, users don't care. However, boot loader writers (e.g. grub) need to care 
about it since LBA is not quite available right away - you have to rely on 
other mechanisms until you can load the rest of the boot loader.

So it's not 100% dead, but yes - most things no longer need to care about it.

Ben





Re: [gentoo-user] Re: Removing KDE 3.5? Or reason to keep it around?

2010-02-26 Thread BRM
- Original Message 

 From: Dale rdalek1...@gmail.com
  - Original Message 
  From: Neil Bothwick
  To: gentoo-user@lists.gentoo.org
  On Fri, 26 Feb 2010 06:34:18 -0800 (PST), BRM wrote:
  Aside from that, I'm not sure I have ever really run emerge
  --depclean, but I also rarely uninstall anything, but don't install
  things left or right to try out either, so typically upgrades are all I
  need to do.
 
  You should still run --depclean as dependencies change and you could still
  have plenty of no longer needed ones installed.
   
  Okay - so I ran emerge --depclean -a and got the below.
  I tried running emerge world -vuDNa as specified, but that didn't resolve it 
 either.
 
  I tried looking in the world file (/var/lib/portage/world) but didn't find any 
 entries that felt safe to remove.
  So, how do I resolve?
 
  Calculating dependencies... done!
* Dependencies could not be completely resolved due to
* the following required packages not being installed:
*
*   ~x11-libs/qt-test-4.4.2 pulled in by:
* x11-libs/qt-4.4.2
*
*   ~x11-libs/qt-sql-4.4.2 pulled in by:
* x11-libs/qt-4.4.2
*
*   ~x11-libs/qt-webkit-4.4.2 pulled in by:
* x11-libs/qt-4.4.2
*
*   ~x11-libs/qt-assistant-4.4.2 pulled in by:
* x11-libs/qt-4.4.2
*
*   ~x11-libs/qt-gui-4.4.2 pulled in by:
* x11-libs/qt-4.4.2
*
*   ~x11-libs/qt-xmlpatterns-4.4.2 pulled in by:
* x11-libs/qt-4.4.2
*
*   ~dev-libs/poppler-0.10.7 pulled in by:
* virtual/poppler-0.10.7
*
*   ~x11-libs/qt-opengl-4.4.2 pulled in by:
* x11-libs/qt-4.4.2
*
*   ~x11-libs/qt-qt3support-4.4.2 pulled in by:
* x11-libs/qt-4.4.2
*
*   ~x11-libs/qt-dbus-4.4.2 pulled in by:
* x11-libs/qt-4.4.2
*
*   ~x11-libs/qt-svg-4.4.2 pulled in by:
* x11-libs/qt-4.4.2
*
*   ~x11-libs/qt-script-4.4.2 pulled in by:
* x11-libs/qt-4.4.2
*
* Have you forgotten to run `emerge --update --newuse --deep world` prior
* to depclean? It may be necessary to manually uninstall packages that no longer
* exist in the portage tree since it may not be possible to satisfy their
* dependencies.  Also, be aware of the --with-bdeps option that is documented
* in `man emerge`.
 
 
 
 I ran into this a long time ago and I added this to my make.conf so that 
 I don't forget.  Try running this:
 
 emerge -uvDNa --with-bdeps y world
 
 Then see what that does.  That added bit makes it look deeper into 
 dependencies. 

Okay, tried that. I did install 7 more packages (1 new, 6 rebuilds/updates).
But it didn't resolve the problem. I'm trying to determine if qt-4.4.2 is even 
installed.
Looking at /usr/lib there doesn't appear to be any qt-4.4.2 libs, only qt-4.5.3.

qlist -Ia | grep x11-libs | grep qt  only returns the following:
/usr/qt/3/etc/settings/.keep_x11-libs_qt-3
/etc/qt4/.keep_x11-libs_qt-core-4

Also, find | grep libQt | grep 4\.4 didn't return anything, while find | 
grep libQt | grep 4\.5 returned what I saw in /usr/lib/qt4.

So, then is x11-libs/qt-4.4.2 even installed? If not, how do I get rid of this?
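
(If there's a cleaner way to double-check which qt versions are actually 
installed, I'm all ears - I was going to try something along the lines of:

qlist -Iv x11-libs/qt
equery list 'x11-libs/qt*'

assuming I'm reading the qlist/equery man pages correctly.)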

TIA,

Ben

 Like you, I thought that was what the -D did but 
 apparently that only goes to a certain depth then stops.  If you want to 
 add that to make.conf like I did, this is what goes there:
 
 EMERGE_DEFAULT_OPTS="--with-bdeps y"
 
 It's your choice whether to add that or not.  It will make emerge 
 process what needs to be updated a while longer tho.
 
 Dale
 
 :-)  :-)





Re: [gentoo-user] Re: Removing KDE 3.5? Or reason to keep it around?

2010-02-26 Thread BRM
- Original Message 

 From: Dale rdalek1...@gmail.com
  On 02/27/2010 04:15 AM, BRM wrote:
  From: Neil Bothwick
  To: gentoo-user@lists.gentoo.org
  On Fri, 26 Feb 2010 06:34:18 -0800 (PST), BRM wrote:
  Aside from that, I'm not sure I have ever really run emerge
  --depclean, but I also rarely uninstall anything, but don't
  install things left or right to try out either, so typically
  upgrades are all I need to do.
  You should still run --depclean as dependencies change and you
  could still have plenty of no longer needed ones installed.
  Okay - so I ran emerge --depclean -a and got the below. I tried
  running emerge world -vuDNa as specified, but that didn't resolve
  it either.
  I tried looking in the world file (/var/lib/portage/world) but didn't
  find any entries that felt safe to remove.
  Safe as to what?  If something is in the world file that you didn't 
 explicitly request, then it doesn't belong there.  For example, if you have 
 x11-libs/qt-gui in world, you should delete it.  The world file should not 
 contain dependencies, it should only contain the stuff you emerged directly.

Okay...that kind of makes more sense now.
From what I've read in the past, modifying 'world' would be a big no-no, and 
very risky - so I never touched it - also why I never really ran 'emerge 
--depclean', which is reporting some 400 packages to remove now that I've got 
that cleaned up.

  To give an example, if you emerge media-video/smplayer, then that one will 
 end up in the world file.  But smplayer will also pull in qt and mplayer.  Those 
 do not go in the world file.  When you unmerge smplayer again, qt and mplayer 
 will not be unmerged unless you run emerge --depclean.  However, if qt and 
 mplayer end up in the world file anyway, it means you made a mistake at 
 some point; like emerging something that is a dependency but forgetting to 
 specify the -1 (or --oneshot) option to emerge.
  So if you see something in the world file that you know you don't need directly 
 (and I doubt you need qt directly; KDE for example needs it, you, as a person, 
 don't) it's safe to remove.
  Of course always make a backup first :P
 If I edit the world file and I am not sure, I always run -p --depclean.  That 
 should tell you if you are about to make a boo boo.  The package you removed will 
 be cleaned out, but so will other things.  If it starts to remove something that 
 you know you want to keep, then you need to figure out why that entry was there 
 and what can be put in the world file to keep the things you do want.
 The example Nikos used is a good one.  If you decide you don't want smplayer but 
 want to use mplayer, then you would need to add mplayer to the world file so 
 that it will stay, while --depclean will remove smplayer when you run it.
 Nikos is correct on the -1 option tho.  That is the same as --oneshot by the 
 way.  That is the biggest reason that something ends up in the world file that 
 shouldn't be there.  I would just about bet that we have all forgotten the -1 
 option more than once.  It doesn't matter how long a person has used Gentoo, it 
 just happens.

True. I never really understood the --oneshot thing before, but now that makes 
sense.
I did it when directions said to, but not really otherwise. Well, now I know...
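
For my own notes, the pattern seems to be roughly this (package names are just 
placeholders):

emerge -1av some-dependency        # --oneshot: build it without recording it in world
emerge -nav media-video/mplayer    # --noreplace: record it in world without rebuilding it
emerge -pv --depclean              # preview what would be removed before cleaning for real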

TIA,

Ben





Re: [gentoo-user] Portage GUI interfaces...

2010-02-25 Thread BRM
- Original Message 

 From: Alan McKinnon alan.mckin...@gmail.com
 On Thursday 25 February 2010 05:14:06 ubiquitous1980 wrote:
  BRM wrote:
   I am interested in finding a GUI interface for working with portage,
   preferably for KDE4. Namely b/c I am getting a little tired of having
   konsole windows open and not being able to keep track of where I am in
   the emerge update process - something a GUI _ought_ to be able to
   resolve.
  In the case that there is not a GUI tool, why not run as root # tail -f
  /var/log/emerge.log
  It tells you what is being installed at present
  Another, to see what you are downloading is # tail -f
  /var/log/emerge-fetch.log
 Set the terminal up so that it displays the running command in the titlebar.

I do have that set up - and it _used_ to work just fine. However, now it just 
shows python 2.6.
So I'm looking for a better solution.

The 'genlop' tool that Dale suggested looks like a great find; so it looks like 
I have something even if I don't find a GUI.
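
For reference, the two genlop invocations that look most useful to me (going by 
its man page):

genlop -c            # what emerge is compiling right now, and how long it has been at it
genlop -t kdelibs    # past compile times for a given package (kdelibs just as an example)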

- Original Message 
 From: Ronan Arraes Jardim Chagas roni...@gmail.com
 Me and Locke Shinseiko are developing a graphical tool to make portage daily 
 tasks easier. It is called KPortageTray and you can find it on 
 app-portage/kportagetray at kde overlay.
snip

While I like the idea of having something in the system tray, I'm not a fan of 
your solution.
I'd much rather have a nice GUI interface wrapped around it all.

BTW, you have the same problem if the SSH connection or Konsole is closed 
during an emerge.
You can sort of get around that using 'screen', but needless to say - just 
using Konsole doesn't solve the problem.
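
Something along these lines is what I had in mind with 'screen' (the session 
name is just an example):

screen -S emerge          # start a named session
emerge -uvDNa world       # run the update inside it
# Ctrl-a d detaches; the emerge keeps running
screen -r emerge          # reattach later, even from a new SSH login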

Also, with KDE/Qt you can wrap processes (QProcess, not sure what the KDE 
wrapper is) and get their I/O,
so you could probably try wrapping a detached process of the desired command 
and then just pull its I/O.
I would be surprised if the other GUIs were not either doing that or directly 
working with emerge/portage via an API (if there is one aside from the 
command-line).

Ben





[gentoo-user] Removing KDE 3.5? Or reason to keep it around?

2010-02-25 Thread BRM
I am quite happy with KDE4 - presently using KDE 4.3.5. I still have KDE 3.5.10 
installed, and am wondering how much longer I need to keep it around. I probably 
use all KDE4 apps now, though there might be a few here or there that I use on a 
rare occasion that are still KDE3 based... maybe... and no, I don't plan on 
using the KDE Sunset Overlay[1].

Anyhow... I'm wondering what the best method to remove KDE 3.5 safely is:

1) Just leave it and maybe it'll get removed on its own?

2) Found this entry on removing it
http://linuxized.blogspot.com/2008/10/how-to-unmerge-kde-3-packages-if-their.html

But nothing registers as a 'dup' even though qlist does show a lot of KDE 
3.5.10 packages installed. (Yeah, I'd need to modify the line to ensure it 
doesn't remove KDE 4.3.5).

3) Gentoo KDE4 guide suggests a method, but it seems to be more related to 
removing KDE entirely...
http://www.gentoo.org/proj/en/desktop/kde/kde4-guide.xml

TIA,

Ben

[1] Though I do agree it should be kept around.





[gentoo-user] Portage GUI interfaces...

2010-02-24 Thread BRM
I am interested in finding a GUI interface for working with portage, preferably 
for KDE4.
Namely b/c I am getting a little tired of having konsole windows open and not 
being able to keep track of where I am in the emerge update process - something 
a GUI _ought_ to be able to resolve.

In googling, I noticed Kuroo, but it's no longer maintained (nearly 2 years out 
of date now, so it would have to have been KDE3) so it's been understandably 
removed from mainline portage - though I also noticed information on a Kuroo 
overlay.

And I also came across Porthole; however, all versions are marked 
Unstable/Testing (~) at the moment.

Can anyone give some advice on these or others?


Ben





Re: [gentoo-user] Has semantic-desktop really become compulsatory for kmail?

2010-02-17 Thread BRM
- Original Message 

 From: Mick michaelkintz...@gmail.com
 I also happen to own a couple of old PCs which I try to keep lean and I don't 
 mind the odd double declutching to change gears.  Now, I understand the 
 development philosophy of KDE4 since this was very well explained, but that 
 does not stop me wishing that the developers were a bit more modular in their 
 approach.  This is because I would like to use a few KDE apps, but do not 
 want 
 to have to download and install a load of ever increasing dependencies.  I am 
 after a pick 'n mix from the sweet shop, rather than being 'forced' to have 
 one of each.

All I can say is try submitting a patch to the KDE folk.
They're not setting out to support that kind of environment, but you never know 
what kinds of patches they'll take.

They are looking at low-end systems and scalability (read aseigo's blog for 
info) - from phones to netbooks to laptops/desktops to servers.

So if you want to run KDE4 on those lean+mean systems, check with them - 
there's probably a branch of KDE4 you can use.

Just 2 cents.

Ben





Re: [gentoo-user] Re: Has semantic-desktop really become compulsatory for kmail?

2010-02-12 Thread BRM
- Original Message 

 From: Zeerak Waseem zeera...@gmail.com
 On Fri, 12 Feb 2010 10:53:04 +0100, Neil Bothwick wrote:
  On Fri, 12 Feb 2010 05:19:43 +0100, Zeerak Waseem wrote:
  But I do find it silly, that the various applications that aren't
  dependent of the DE, to require a dependency of the DE. It just seems
  a bit backwards to me :-) I simply don't understand.
  That just shows that they are still partially dependent on the DE, KMail
  also needs various KDE libraries. KDE was designed as a cohesive DE, not
  just a bunch of applications with a common look and feel. KDE apps are
  intended to be run on a KDE desktop, anything else is a nice bonus.
 Indeed, and it is a noble pursuit.
 But from a marketing aspect, it would make more sense to have things that 
 aren't -vital- for the app, unlike kde-libs in this case, be soft (is this the 
 correct term?) dependencies.
 Both aspects could be satisfied by having semantic-desktop as an optional dep. 
 It's not a vital function for kmail to be able to tag and index all the files 
 on the computer (which is what the semantic-desktop does if I understand 
 correctly), it's a nifty thing for KDE users, and soon probably Gnome users as 
 well, but for anyone else, it's a nifty thing -if- they feel the need for it. 
 Much like most other bits of software :-)

Obviously you don't understand the reason for the dependency.
It does not exist so that Kmail can index all the files on the system, but for 
the opposite - so that Kmail can participate in the search by allowing the 
system to search _its_ data.

And, btw, you're not turning it off within Kmail, but at the system - DE - 
level.
The application itself will still check whether it can participate, find 
nothing turned on to support it, and then simply not do anything.
 
 In the end there isn't a right or wrong, but just a standpoint.

Question: are you a software developer?

Kmail probably has the dependency the way it does b/c it is far easier to make 
it a hard dependency and let the system decide not to enable the functionality 
than it is to litter the codebase with if (semanticDesktopEnabled)... code.

 Some don't mind the bloat (we can agree that it's bloat if you're just going to 
 disable the function as soon as it's been installed, right?) and don't consider 
 it to be the slightest bit akin to bloat, whilst to others it's an unnecessary 
 feature forced on them (mainly thinking of the people not using kde, but also 
 those kde-users that just disable it) and thus becomes bloat.

No more than it is bloat for gcc to support mmx/sse/sse2/sse3/sse4 when your 
processor doesn't support them.

Ben





Re: [gentoo-user] Re: Has semantic-desktop really become compulsatory for kmail?

2010-02-12 Thread BRM
- Original Message 

 From: Zeerak Waseem zeera...@gmail.com
 But then the question isn't whether there are a number of soft dependencies, 
 but in the case of semantic-desktop whether -it- is a soft dependency. Like 
 previously stated, I don't use kmail, nor do I intend to (I at least think I 
 mentioned it). This is just my take on the matter of whether it is truly 
 necessary, or even a good idea, to have semantic-desktop as a hard dependency.

So you are complaining why? Why even install KMail if you are not going to use 
it?

 And as stated, this is not in the light of a full blown KDE env, but mainly 
 in 
 considerations to when you're using another window manager. Be it icewm, jwm, 
 openbox or whatever. Should something that is an integrated part of the KDE 
 desktop environment be forced on those that don't use KDE?

The KDE devs in general (applications, etc.), with the exception of KOffice and 
possibly Amarok, are all targeting their development at an integrated DE meant 
to be run under KDE.
They have been pretty clear as well that they do not intend the applications to 
be run stand-alone under other DEs (even Gnome) - that's not officially 
supported.
And this has been especially clear for KDE4 (see aseigo's blog for example).

 Our opinions on this matter obviously differ, and for that simple reason I find 
 it interesting to find out -why- you think it's okay that they're being forced. 
 And simply stating that the devs decided that this is how it's done is pretty 
 much as nonconstructive an argument as dbus is bad because it's new. I'd like 
 to find out why you seem to disagree, so please, by all means, enlighten me :-) 
 (I am asking for it after all ;))

If you disagree with the devs' lack of support for things beyond their 
requirements, or for things that they have explicitly stated they do not 
support, that is your issue.
The fact is the devs are building the application for the target environment - 
KDE4 - and no other DE (e.g. gnome, icewm, jwm, openbox, etc.).
So expect the dependencies to match what would be expected in that environment 
if you want to use the application.
Expecting anything else is unreasonable of you as a user.

A simple analogy: a Chevy Malibu part not fitting a Ford F150. Sure, the parts 
may perform the same function in the end, but they were designed for completely 
different vehicles.

Ben





Re: [gentoo-user] Qt3 deprecated, but Qt4 still not x86 (only ~x86)???

2010-02-10 Thread BRM
- Original Message 

 From: Volker Armin Hemmann volkerar...@googlemail.com
 To: gentoo-user@lists.gentoo.org
 Sent: Wed, February 10, 2010 12:18:59 PM
 Subject: Re: [gentoo-user] Qt3 deprecated, but Qt4 still not x86 (only 
 ~x86)???
 ALSO:
 from the qt-4.5.3 ebuild:
 
 KEYWORDS=~alpha amd64 arm hppa ~ia64 ~mips ppc ppc64 -sparc x86
 
 when was the last time you sync'ed?

http://gentoo-portage.com/x11-libs/qt

shows the same thing - qt-4.5.3 is hard masked.

Ben





Re: [gentoo-user] Devicekit - especially just for Dale

2010-01-21 Thread BRM
- Original Message 

 From: pk pete...@coolmail.se
 BRM wrote:
  The point of the UI is that you ought not care what goes where, unless you 
  are 
 debugging the UI or the program itself.
  While a UI is important; a good UI is key.
 And a plain text editor is, imo, a good UI; everybody knows how to use
 it. Why bring in another extra (translation) layer?

That's only good if you always store all options - every possible combination, 
etc. - at all times.
Unfortunately, that's almost never the case.

Thus you need to know how to create a good working configuration.
This requires a tool the user can use to edit the configuration, with the tool 
providing access to the options you otherwise would not know about and also 
protecting you by helping to ensure the configuration is in a valid format. Of 
course, the tool also has to be upgraded along with the program - so that it 
knows how to build correct configurations.

This is where XML does somewhat shine for configurations - you can get by with 
a little less effort by having the tool run XML validation on the configuration 
file; then even if your tool falls a little behind, it can still validate the 
configuration file against the DTD/RNG/Schema.
But it also means that you MUST have a tool.
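
(Even a plain command-line check gets you part of the way there, assuming the 
program actually ships its schema or DTD - the file names here are made up:

xmllint --noout --schema myprog-config.xsd myprog-config.xml
xmllint --noout --dtdvalid myprog-config.dtd myprog-config.xml

That still isn't the friendly editor the config file deserves, though.)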

Ben





Re: [gentoo-user] Devicekit - especially just for Dale

2010-01-20 Thread BRM
- Original Message 

 From: Alan McKinnon alan.mckin...@gmail.com
 On Tuesday 19 January 2010 22:36:45 BRM wrote:
   Or a pretty GUI with clicky boxes to change the settings while never
   letting the user see the contents of the XML.
  Once the user interface is in place it doesn't matter whether it is XML or
   something else. The key is that is has a user interface, you can do a INI
   format and still be just as crappy.
 Classic examples are the windows registry editor and gconf. My god, I hate 
 both. It seems like the devs just chomped an XML file and rotated it 90 
 degrees to get an expandable tree view.

True - good examples of horrid interfaces. Needless to say...

 Which does absolutely nothing to aid my understanding of what goes where.

The point of the UI is that you ought not care what goes where, unless you are 
debugging the UI or the program itself.

While a UI is important; a good UI is key.

BRM





Re: [gentoo-user] Devicekit - especially just for Dale

2010-01-19 Thread BRM
- Original Message 

 From: Neil Bothwick n...@digimed.co.uk
 On Tue, 19 Jan 2010 01:09:16 +0200, Alan McKinnon wrote:
   XML is a machine-readable file format that just happens to use ASCII
   characters, it is not meant to be modified by a text editor, so if
   your program uses XML configuration files, it should include a means
   of editing those files that does not include the use of vim.  
  which almost by definition means you need an xml-information parser on
  par with an xml-parser to figure out what the hell the fields mean,
  then design an intelligent viewer-editor thingy that lets the user
  add-delete-change the information in the xml file. All the while
  displaying to the user at least some information about the fields in
  view.

Making the interface for the config file - XML or otherwise - is far more 
complex and cumbersome than writing the parser (XML or otherwise).

 Or a pretty GUI with clicky boxes to change the settings while never
 letting the user see the contents of the XML.

Once the user interface is in place, it doesn't matter whether it is XML or 
something else.
The key is that it has a user interface; you can use an INI format and still be 
just as crappy.

The problem is that most don't think through using the XML so much. They just 
start using it.

While I have not had any problems with HAL myself (it just works); I do agree 
that a good user interface is necessary for the config files - I'd agree that 
is the case for any program, regardless of its back-end config file format.

$0.02

Ben




Re: [gentoo-user] Wireless...

2010-01-06 Thread BRM
- Original Message 

 From: BRM bm_witn...@yahoo.com
 From: Mike Edenfield 
   On 12/2/2009 9:17 PM, BRM wrote:
   I have wireless working (b43legacy driver for the Dell Wireless Broadcom) 
 through a static configuration in /etc/conf.d/net - basically:
   essid_wlan0=myWLAN
   key_MYWLAN=somekey
   config_MYWLAN=( dhcp )
   preferred_APS= ( myWLAN )
   I would like to use a tool like WPA Supplicant instead so I can have a 
   more 
 dynamic configuration.
   I've tried to setup WPA supplicant but haven't been able to get it to 
   work.
  Probably not what you wanted to hear, but I had the exact same problem with 
 the Dell bcm-based adapter in my Inspiron laptop.
   It would work fine for open wireless and WEP-secured wireless, but wouldn't 
  associate with a WPA-secured access point.
  Eventually I spent about $30 to purchase an iwl3945 replacement from Dell, 
 which worked fine, and never looked back.
 Thanks for the heads up.
 At this point, I'll be happy if I can just get WEP working using WPA 
 Supplicant/WiCD/etc. instead of a root user centric configuration file.

Well, it seems to be something with my home network; not sure what.
Over the holidays I did some traveling and took my laptop with me.
I was able to connect to other WEP networks just fine using WPA Supplicant;
however, when I got home I couldn't get WPA Supplicant to work with my home 
network and
had to revert back to setting it up via /etc/conf.d/net.

My home wireless network is a Linksys WRT54G (version 3 hardware), with 
slightly outdated firmware (1 or 2 releases behind).
SSID is visible. It seems to find it, but then loses it pretty quickly and I 
have to restart wlan0 before I can try again.
Works fine when using a static WEP configuration though (e.g. no WPA 
Supplicant/WiCD/etc.).

Not sure what to look at next, but this is going to drive me a bit crazy.
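
For the record, the sort of WEP setup wpa_supplicant wants is, as far as I can 
tell, roughly this (SSID and key changed, and -Dwext assumed for the b43legacy 
driver):

# /etc/conf.d/net
modules=( "wpa_supplicant" )
wpa_supplicant_wlan0="-Dwext"

# /etc/wpa_supplicant/wpa_supplicant.conf
network={
    ssid="myWLAN"
    key_mgmt=NONE          # WEP is treated as "no WPA key management"
    wep_key0="somekey"     # quoted = ASCII key; unquoted hex also works
    wep_tx_keyidx=0
}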

Ben





Re: [gentoo-user] {OT} Preparing a laptop for sale

2009-12-17 Thread BRM
From: Neil Bothwick n...@digimed.co.uk

 On Thu, 17 Dec 2009 18:49:31 +0200, Alan McKinnon wrote:
  Let's look at the obvious solution then:
  remove the hard drive containing sensitive data, replace it with a new
  one, sell laptop.
  Ka-Ching! Problem solved.
 Unfortunately, the hard drive seller gets more Ka-Ching and the OP gets
 less. It's always a trade off.

Personally, I'd just go ahead and do the DBAN route as already mentioned.
It's worth it - and easy enough to do. (I've done it for one of my work laptops 
that I purchased from work a couple years ago.)

On the other hand, if you really don't want to do that - keep your hard drive 
and sell the laptop without it. The buyer can get another drive for it 
themselves.
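
(If booting DBAN is a hassle, a single pass from any live CD should be in the 
same ballpark for a private sale - something like this, with /dev/sdX being the 
drive to wipe:

shred -v -n 1 -z /dev/sdX    # one random pass, then a final pass of zeros

DBAN's multiple passes are still the more cautious route, of course.)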

Ben





Re: [gentoo-user] 2.6.31 vfat driver broken?

2009-12-10 Thread BRM
- Original Message 

From: Frank Steinmetzger war...@gmx.de
 Am Donnerstag, 10. Dezember 2009 schrieb Willie Wong:
   I heard rumours of problems with the current FAT implementation due to
   M$. I went back to 2.6.30 for the moment. So what's your proposal?
   Usually I don't have the need for übercurrent kernels, but would
   installing 2.6.32 help?
  Did you file a bug? Where did you hear this rumour?
 I believe heise.de, the publisher of computer magazine c't. I just looked for 
 it - it was about Microsoft pursuing legal actions against TomTom for using 
 their FAT file system on their linux based devices. I deduced from that that 
 they rewrote the FAT driver, but this would seem rather unlikely.

You are referring to the Long File Name issue. Apparently MS has patented the 
dual way in which LFNs are stored. The fix Linux employed was to only ever use 
a single storage method.
This issue does not affect what you are seeing, nor would it affect the FAT 
driver all that much.
From what I understand (from the various articles, emails, etc. I've seen 
on-line), there was no rewrite of the FAT driver; just a slight disabling of 
some functionality (for the dual mode).

Ben




Re: [gentoo-user] Wireless...

2009-12-03 Thread BRM
- Original Message 

From: Mike Edenfield kut...@kutulu.org
  On 12/2/2009 9:17 PM, BRM wrote:
  I have wireless working (b43legacy driver for the Dell Wireless Broadcom) 
  through a static configuration in /etc/conf.d/net - basically:
  essid_wlan0=myWLAN
  key_MYWLAN=somekey
  config_MYWLAN=( dhcp )
  preferred_APS= ( myWLAN )
  I would like to use a tool like WPA Supplicant instead so I can have a more 
  dynamic configuration.
  I've tried to setup WPA supplicant but haven't been able to get it to work.
 Probably not what you wanted to hear, but I had the exact same problem with 
 the Dell bcm-based adapter in my Inspiron laptop.
 It would work fine for open wireless and WEP-secured wireless, but wouldn't 
 associate with a WPA-secured access point.
 Eventually I spent about $30 to purchase an iwl3945 replacement from Dell, 
 which worked fine, and never looked back.

Thanks for the heads up.
At this point, I'll be happy if I can just get WEP working using WPA 
Supplicant/WiCD/etc. instead of a root user centric configuration file.

Ben




[gentoo-user] Laptop resurrection...

2009-12-02 Thread BRM
I'm still working to get my laptop back up; I have one more thing to try.

Presently, I am having a problem compiling a 2.6.30-gentoo-r8 kernel that 
actually works. It might be a processor issue - linux reports it as a Pentium M, 
which is what I have selected during 'make menuconfig', but grub keeps reporting 
that it is not a recognized format or something to that effect, so it won't 
load it.

Questions:

1) I am using the Gentoo 2007.0 LiveCD to boot with, then chroot'ing into my 
installation to build the kernel. I shouldn't need a newer LiveCD, correct?
2) Grub doesn't need to be re-run (e.g. running the grub prompt and going 
through the install procedure) after changes to the menu file, correct?

TIA,

Ben



Re: [gentoo-user] Laptop resurrection...

2009-12-02 Thread BRM
- Original Message 

From: Mick michaelkintz...@gmail.com
 2009/12/2 BRM bm_witn...@yahoo.com:
  I'm still working to get my laptop back up; I have one more thing to try.
  Presently, I am having a problem compiling a 2.6.30-gentoo-r8 
  kernel that actually works. It might be a processor issue - linux reports 
  it as a
  Pentium M, which is what I have selected during 'make menuconfig', but 
  grub keeps reporting that it is not a recognized format or something to 
  that effect, so it won't load it.
  Questions:
  1) I am using the Gentoo 2007.0 LiveCD to boot with, then chroot'ing into 
  my installation to build the kernel. I shouldn't need a newer LiveCD, 
  correct?
 Correct, as long as it recognises your hardware.

Thanks.

  2) Grub doesn't need to be re-run (e.g. running the grub prompt and going 
  through the install procedure) after changes to the menu file, correct?
 Correct, assuming you have installed GRUB correctly in the first instance

Thanks

  - which makes me ask:
 What is your exact error message?

I'll post that tonight.

- Original Message 
From: Marcus Wanner marc...@cox.net
 I got that error when I copied the wrong kernel image to /boot, make 
 sure you are copying the one detailed in the gentoo handbook (chapter 7, I 
 think).

The last time, I copied the file specified by the kernel's README: 
arch/<arch>/boot/bzImage - arch being x86 in my case.

Though according to http://www.gentoo.org/doc/en/kernel-config.xml it should be 
arch/i386/boot/bzImage...not sure which is right off hand.
Will check into it tonight.
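
Roughly the steps I plan to re-run tonight, if I'm remembering the x86 merge 
right (the image seems to land under arch/x86/boot/ on 2.6.30; paths assume a 
separate /boot partition):

mount /boot
cp arch/x86/boot/bzImage /boot/kernel-2.6.30-gentoo-r8
# then make sure the grub.conf entry points at /boot/kernel-2.6.30-gentoo-r8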

Ben



