Re: [gentoo-user] latest eix versions messes with my screen status bar

2012-03-20 Thread Michael Mol
On Tue, Mar 20, 2012 at 11:14 AM, Paul Hartman
paul.hartman+gen...@gmail.com wrote:
 Hi,

 Following a recent eix version update, after running eix-sync it
 leaves the session name on my screen status bar like:

 $eix-sync: Finished

 does anyone know anything about that?

 Is there perhaps something I can add to my shell prompt to make it
 reset the status bar title after a program exits?

I would _love_ to know the answer to this, as I've been wanting to
stick something like that in my $PS1 for months now. (General use and
convenience, not because of eix).
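For what it's worth, here's a minimal sketch of the kind of prompt hook that could do this (assuming bash under GNU screen; the \033k ... \033\\ pair is screen's set-window-title escape, and settitle is a hypothetical helper name):

```shell
# Sketch: re-assert a default screen window title from the shell prompt.
# Assumes bash; \033k ... \033\\ is GNU screen's "set window title" escape.
settitle() { printf '\033k%s\033\\' "$1"; }

# Run before every prompt, overwriting whatever title the last program
# (e.g. eix-sync) left behind:
PROMPT_COMMAND='settitle "${SHELL##*/}"'
```

Terminals that aren't screen would need the xterm-style OSC title sequence instead; this is untested against eix specifically.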


-- 
:wq



Re: [gentoo-user] Changing compilers

2012-03-19 Thread Michael Mol
FEA jobs can be parallelized, right? Take a hard look at CUDA and OpenCL.

ZZ
On Mar 19, 2012 1:29 AM, Andrew Lowe a...@wht.com.au wrote:

 Hi all,
Has anyone played around with the various better known compilers on
 Gentoo? By better known, I'm referring to gcc, Intel, llvm, pathscale. My
 situation is that I've just started my PhD which requires me to do Finite
 Element Analysis, FEA, and Computational Fluid Dynamics, CFD, and I want to
 find the best compiler for the job. Before anyone says "Why bother, XXX
 compiler is only 1 - 2% faster than gcc", in the context of the work I'm
 doing this 1 - 2% IS important.

 What I'm looking for is any feedback people may have on ability to compile
 the Gentoo environment, the ability to change compilers easily, gcc-config
 or flags in make.conf, as to whether the compiler/linker can use the
 libraries as compiled by gcc on a standard gentoo install and so on.
 Obviously there is much web trawling to be done to find what other people
 are saying as well.

 Any thoughts, greatly appreciated,
   Andrew Lowe





Re: [gentoo-user] Re: systemd? [ Was: The End Is Near ... ]

2012-03-19 Thread Michael Mol
On Mon, Mar 19, 2012 at 9:33 AM, Neil Bothwick n...@digimed.co.uk wrote:
 On Sun, 18 Mar 2012 02:49:56 -0600, Canek Peláez Valdés wrote:

  They ensure that there is an sshd configuration file and
  give a meaningful message (including where to find the sample) if it
  is not present, and check for the presence of the hostkeys (again
  which are needed) and create them if they are not present. Your 9
  lines of sshd.service do none of this.

 That is completely true. I also think that those checks do not
 belong in the init script: I think the presence of the configuration
 file should be guaranteed by the package manager at install time, and
 likewise the creation of the hostkeys.

 sshd is a bit of a special case. Think of live CDs, like SystemRescueCD. If
 the keys were created at installation time, every CD would have the same
 keys, which is not particularly desirable.

I prefer "counterexample" to "special case" ... I don't like calling
things special cases because it suggests that they're somehow more
privileged than anything else, and unnecessarily weighs against
software which hasn't been written yet.

A similar case which falls into the same kind of circumstance:
per-host IDs in mass-deployment scenarios. You see this in large
arrays of similar systems; 'sbc-a3d6' 'sbc-a3d9' 'sbc-7721' ... Heck,
applying something like that to live installation media would be nice;
not having every new install called simply 'gentoo' by default would
be very helpful in installfest scenarios. Identical hostnames screw
with DHCP-driven DDNS updates. I ran into that on my home network.
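As a sketch of the idea (mkname is a hypothetical helper; real media would probe the hardware rather than assume an interface path):

```shell
# Sketch: derive a per-host default hostname from a NIC's MAC address
# instead of hardcoding "gentoo". On real media the MAC would come from
# somewhere like /sys/class/net/eth0/address.
mkname() {
    # strip the colons, keep the last 4 hex digits as a short suffix
    printf 'gentoo-%s' "$(printf %s "$1" | sed 's/://g' | tail -c 4)"
}

mkname "aa:bb:cc:dd:ee:ff"   # gentoo-eeff
```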

-- 
:wq



Re: [gentoo-user] Changing compilers

2012-03-19 Thread Michael Mol
On Mon, Mar 19, 2012 at 8:32 AM, Mark Knecht markkne...@gmail.com wrote:
 On Sun, Mar 18, 2012 at 10:26 PM, Andrew Lowe a...@wht.com.au wrote:
 Hi all,
    Has anyone played around with the various better known compilers on
 Gentoo? By better known, I'm referring to gcc, Intel, llvm, pathscale. My
 situation is that I've just started my PhD which requires me to do Finite
 Element Analysis, FEA, and Computational Fluid Dynamics, CFD, and I want to
 find the best compiler for the job. Before anyone says Why bother, XXX
 compiler is only 1 - 2% faster than gcc, in the context of the work I'm
 doing this 1 - 2% IS important.

 What I'm looking for is any feedback people may have on ability to compile
 the Gentoo environment, the ability to change compilers easily, gcc-config
 or flags in make.conf, as to whether the compiler/linker can use the
 libraries as compiled by gcc on a standard gentoo install and so on.
 Obviously there is much web trawling to be done to find what other people
 are saying as well.

 Any thoughts, greatly appreciated,
       Andrew Lowe



 Think CUDA

Yes. And as a convenient side-effect, it offers a great excuse to
upgrade your video card with some regularity. The performance of
mid-grade and high-grade video cards continues to improve rapidly.

-- 
:wq



Re: [gentoo-user] Changing compilers

2012-03-19 Thread Michael Mol
On Mon, Mar 19, 2012 at 10:05 AM, Andrew Lowe a...@wht.com.au wrote:
 On 03/19/12 20:34, Mark Knecht wrote:
 On Mon, Mar 19, 2012 at 5:32 AM, Mark Knecht markkne...@gmail.com wrote:
 On Sun, Mar 18, 2012 at 10:26 PM, Andrew Lowe a...@wht.com.au wrote:
 Hi all,
    Has anyone played around with the various better known compilers on
 Gentoo? By better known, I'm referring to gcc, Intel, llvm, pathscale. My
 situation is that I've just started my PhD which requires me to do Finite
 Element Analysis, FEA, and Computational Fluid Dynamics, CFD, and I want to
 find the best compiler for the job. Before anyone says Why bother, XXX
 compiler is only 1 - 2% faster than gcc, in the context of the work I'm
 doing this 1 - 2% IS important.

 What I'm looking for is any feedback people may have on ability to compile
 the Gentoo environment, the ability to change compilers easily, gcc-config
 or flags in make.conf, as to whether the compiler/linker can use the
 libraries as compiled by gcc on a standard gentoo install and so on.
 Obviously there is much web trawling to be done to find what other people
 are saying as well.

 Any thoughts, greatly appreciated,
       Andrew Lowe



 Think CUDA

 Mark

 Sorry. Meant to include this reference: $15 on Kindle. Reads great on
 Kindle for PC.

 http://www.amazon.com/CUDA-Example-Introduction-General-Purpose-ebook/dp/B003VYBOSE/ref=sr_1_4?ie=UTF8qid=1332160431sr=8-4



        I'm sorry but I'm doing a PhD, not creating a career in Academia. The
 concept of writing an FEA or CFD from scratch, with CUDA is laughable, I
 just don't have the time to learn CUDA, research the field, small
 displacement, large displacement, dynamics, material nonlinearities,
 write the code, and then most importantly benchmark it to make sure it's
 actually correct. This is all bearing in mind that I have 20+ years
 experience as a C/C++ technical software developer, including FEA and
 CFD. I'll actually be using Code Aster, an open source FEA code that
 runs under Linux.

        Sorry if I sound narky, but compilers is the subject at hand, not how
 to write FEA code.

If you really care about a 1-2% difference, you should not be
dismissing GPGPU-accelerated code so easily! If the tools you seem to
have already settled on don't support it, you should either use
different tools, or correct the ones you're working with.

The lead Python guy had an astute observation (which I'll generalize)
the other day; for 99% of your program, it doesn't matter what
programming language you use. For the 1% where you need speed, you
should call out into the faster language.

-- 
:wq



Re: [gentoo-user] Changing compilers

2012-03-19 Thread Michael Mol
On Mon, Mar 19, 2012 at 10:18 AM, Andrew Lowe a...@wht.com.au wrote:
 On 03/19/12 22:02, Michael Mol wrote:
 On Mon, Mar 19, 2012 at 8:32 AM, Mark Knecht markkne...@gmail.com wrote:
 On Sun, Mar 18, 2012 at 10:26 PM, Andrew Lowe a...@wht.com.au wrote:
 Hi all,
    Has anyone played around with the various better known compilers on
 Gentoo? By better known, I'm referring to gcc, Intel, llvm, pathscale. My
 situation is that I've just started my PhD which requires me to do Finite
 Element Analysis, FEA, and Computational Fluid Dynamics, CFD, and I want to
 find the best compiler for the job. Before anyone says Why bother, XXX
 compiler is only 1 - 2% faster than gcc, in the context of the work I'm
 doing this 1 - 2% IS important.

 What I'm looking for is any feedback people may have on ability to compile
 the Gentoo environment, the ability to change compilers easily, gcc-config
 or flags in make.conf, as to whether the compiler/linker can use the
 libraries as compiled by gcc on a standard gentoo install and so on.
 Obviously there is much web trawling to be done to find what other people
 are saying as well.

 Any thoughts, greatly appreciated,
       Andrew Lowe



 Think CUDA

 Yes. And as a convenient side-effect, it offers a great excuse to
 upgrade your video card with some regularity. The performance of
 mid-grade and high-grade video cards continues to improve rapidly.


        Sorry, can't do that, I'm using epic,

 http://tinyurl.com/83l5o3z

 which currently ranks at 151 in the top 500 list :) It's amazing how
 fast this list changes, 6 months ago, this machine was at 107 and 6
 months before that 87.

That does change things a bit. I don't know Epic's structure or their
upgrade plans, but if you're confident it's not going to have GPGPU
capabilities, then CUDA and OpenCL are less useful for you. OpenCL, at
least, still handles per-CPU and per-node job dispatching, though. And
that's still likely to be useful when working on huge matrices.

To answer your original question: No, I haven't done much with
anything other than gcc on Gentoo. What you *should* do is grab each
compiler (trial versions, if necessary) and test them to find which
gives you the best results. It's my understanding PhD programs involve
getting things done right, not so much quickly or easily. Best to be
methodical about it.

-- 
:wq



Re: [gentoo-user] [HEADS UP] udev-181

2012-03-19 Thread Michael Mol
On Mon, Mar 19, 2012 at 10:56 AM, walt w41...@gmail.com wrote:
 I just had a bit of a scare after updating to udev-181, but
 all is well now, finally.  (I hope :)

 In addition to the separate /usr problem that has already
 been discussed at length here, there are other important
 changes in udev-181 to be aware of.

 First, I had to add two new items to my kernel config:
 CONFIG_DEVTMPFS (which I thought I'd had for years but didn't)
 and CONFIG_TMPFS_POSIX_ACL.  I also elected to make the devfs
 automounting, but I don't think that was really necessary.

 Second, don't forget like I did to update the udev initscripts
 with etc-update or your machine won't be able to find the udev
 files in their new locations (just like mine didn't) and none
 of your kernel modules will auto-load, either.

 Oh, and of course you need to pre-mount /usr before udev starts
 if you have a separate /usr partition -- but you already knew
 that ;)

Is there an ENOTICE warning in the ebuild to hit people over the head
with these?

Also, how trivial would it be to have the ebuild check the running
kernel config (if available under /proc or wherever) for the necessary
config options?
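As a sketch, such a check could grep the running kernel's config, assuming CONFIG_IKCONFIG_PROC exposes /proc/config.gz (check_kconfig is a hypothetical helper; Gentoo's linux-info.eclass provides a CONFIG_CHECK mechanism roughly along these lines):

```shell
# check_kconfig FILE OPT: succeed if CONFIG_OPT is built-in (=y) or
# modular (=m) in the given kernel config file. Hypothetical helper
# illustrating what an ebuild-side check might do.
check_kconfig() {
    grep -q "^CONFIG_${2}=[ym]" "$1"
}

# Warn about the options udev-181 needs, if the running config is visible:
if [ -r /proc/config.gz ]; then
    zcat /proc/config.gz > /tmp/kconfig.$$
    for opt in DEVTMPFS TMPFS_POSIX_ACL; do
        check_kconfig /tmp/kconfig.$$ "$opt" \
            || echo "warning: CONFIG_${opt} not enabled" >&2
    done
    rm -f /tmp/kconfig.$$
fi
```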

-- 
:wq



Re: [gentoo-user] The End Is Near ... or, get the vaseline, they're on the way!

2012-03-18 Thread Michael Mol
On Sat, Mar 17, 2012 at 11:57 PM, Bruce Hill, Jr.
da...@happypenguincomputers.com wrote:



 On March 17, 2012 at 8:43 PM Mark Knecht markkne...@gmail.com wrote:

 snip
 initramfs side of things. I did have to use one to bring up my server
 with / on a RAID6, not because I needed it long term but in the short
 term I couldn't determine how mdadm was numbering the RAID so that I
 could get grub.conf correct. I'm somehow a bit worried something is
 going to slip by the devs and I'd be better off having an initramfs
 already running on the box when I do allow the upgrades.

 Planning on giving Dracut a try.

 Thanks,
 Mark



 The real short of this is that if you use 0.90 superblocks, and /boot on
 its own little partition, your kernel can assemble your
 RAIDwhateverlevel without an initrd image. You will reboot with the
 /dev/md0 you created as /dev/md0. And unless you have partitions (or is it
 single drives) over 2TB, you can use metadata=0.90.

 As they say, Works For Me (R).

 I've yet to read a simple explanation of HOW-TO do this in a Gentoo doc
 (not that it doesn't exist), but you can follow this very simple
 README_RAID used in Slackware to build them on Gentoo:

 http://slackware.oregonstate.edu/slackware64-current/README_RAID.TXT

I recall reading on this list a week or two ago that kernel
autoassembly of 0.90 arrays was deprecated. :(

-- 
:wq



Re: [gentoo-user] The End Is Near ... or, get the vaseline, they're on the way!

2012-03-18 Thread Michael Mol
On Sun, Mar 18, 2012 at 3:26 AM, Bruce Hill, Jr.
da...@happypenguincomputers.com wrote:



 On March 18, 2012 at 2:30 AM Michael Mol mike...@gmail.com wrote:

 On Sat, Mar 17, 2012 at 11:57 PM, Bruce Hill, Jr.
 da...@happypenguincomputers.com wrote:
 
 
 
  On March 17, 2012 at 8:43 PM Mark Knecht markkne...@gmail.com wrote:
 
  snip
  initramfs side of things. I did have to use one to bring up my server
  with / on a RAID6, not because I needed it long term but in the short
  term I couldn't determine how mdadm was numbering the RAID so that I
  could get grub.conf correct. I'm somehow a bot worried something is
  going to slip by the devs and I'd be better off having an initramfs
  already running on the box when I do allow the upgrades.
 
  Planning on giving Dracut a try.
 
  Thanks,
  Mark
 
 
 
  The real short of this is that if you use 0.90 superblocks, and /boot
 on
  it's own little partition, your kernel can assembly your
  RAIDwhateverlevel without an initrd image. You will reboot with the
  /dev/md0 you created as /dev/md0. And unless you have partitions (or is
 it
  single drives) over 2TB, you can use metadata=0.90.
 
  As they say, Works For Me (R).
 
  I've yet to read a simple explanation of HOW-TO do this in a Gentoo doc
  (not that it doesn't exist), but you can follow this very simple
  README_RAID used in Slackware to build them on Gentoo:
 
  http://slackware.oregonstate.edu/slackware64-current/README_RAID.TXT

 I recall reading on this list a week or two ago that kernel
 autoassembly of 0.90 arrays was deprecated. :(

 --
 :wq


 Works on my computers.

And mine. But 'deprecated' means 'this may go away in the future'.

-- 
:wq



Re: [gentoo-user] mdev for udev substitution instructions web page is up

2012-03-18 Thread Michael Mol
On Sun, Mar 18, 2012 at 3:29 AM, Walter Dnes waltd...@waltdnes.org wrote:
 On Sat, Mar 17, 2012 at 11:37:49PM +0100, Sebastian Pipping wrote
 On 03/17/2012 03:51 AM, Walter Dnes wrote:
  The page will be permanently under construction, i.e. evolving as
  we find out more about how mdev works.

 Unless you want to maintain total control of the data flow I would
 suggest turning that page into a new wiki page at
 https://wiki.gentoo.org/ to ease contribution to others and to
 increase availability and accessibility of that content.

  Probably the best thing to do in the long run. I've never done any
 wiki editing/posting, so I'll take a day or 2 to read up on it.  The
 help page looks rather complex.  Or are there any volunteers here who
 can copy the contents of the web page to a wiki page?  I'll gladly
 change my webpage to a pointer to the wiki page.

First stab: https://wiki.gentoo.org/wiki/Mdev

I don't know the Gentoo wiki's style policies, but I do know MediaWiki
reasonably well. I did take license to edit for language/linguistic
style, but not for substantive comment. Please go through and correct
anything that seems broken. It's almost 4:30AM where I am, and I
really ought to be sleeping, so I almost certainly mussed something
up. Pretty sure I switched styles about halfway through, too.

You should find the MediaWiki syntax reasonably accessible; just click
'edit' at the top of the page and compare what the 'raw' form looks
like with what the results look like.


  On a tangent, is there a better tutorial on wiki pages anywhere?

MediaWiki.org has the best content. Generally, you'd play around in a
sandbox page of your own (on whatever wiki site you have an account
on) to get a feel for things.

-- 
:wq



Re: [gentoo-user] mdev for udev substitution instructions web page is up

2012-03-18 Thread Michael Mol
On Sun, Mar 18, 2012 at 4:38 AM, Michael Mol mike...@gmail.com wrote:
 On Sun, Mar 18, 2012 at 3:29 AM, Walter Dnes waltd...@waltdnes.org wrote:
 On Sat, Mar 17, 2012 at 11:37:49PM +0100, Sebastian Pipping wrote
 On 03/17/2012 03:51 AM, Walter Dnes wrote:
  The page will be permanently under construction, i.e. evolving as
  we find out more about how mdev works.

 Unless you want to maintain total control of the data flow I would
 suggest turning that page into a new wiki page at
 https://wiki.gentoo.org/ to ease contribution to others and to
 increase availability and accessibility of that content.

  Probably the best thing to do in the long run. I've never done any
 wiki editing/posting, so I'll take a day or 2 to read up on it.  The
 help page looks rather complex.  Or are there any volunteers here who
 can copy the contents of the web page to a wiki page?  I'll gladly
 change my webpage to a pointer to the wiki page.

 First stab: https://wiki.gentoo.org/wiki/Mdev

 I don't know the Gentoo wiki's style policies, but I do know MediaWiki
 reasonably well. I did take license to edit for language/linguistic
 style, but not for substantive comment. Please go through and correct
 anything that seems broken. It's almost 4:30AM where I am, and I
 really ought to be sleeping, so I almost certainly mussed something
 up. Pretty sure I switched styles about halfway through, too.

 You should find the MediaWiki syntax reasonably accessible; just click
 'edit' at the top of the page and compare what the 'raw' form looks
 like with what the results look like.


  On a tangent, is that a better tutorial on wiki pages anywhere?

 MediaWiki.org has the best content. Generally, you'd play around in a
 sandbox page of your own (on whatever wiki site you have an account
 on) to get a feel for things.

BTW, where would one go to get involved in organization of the wiki? I
found myself wishing for templates for consistent formatting of things
like files, one-liners and naming of ebuilds, but I don't think I
ought to simply create the templates I'm looking for without talking
with someone first.

-- 
:wq



Re: [gentoo-user] mdev for udev substitution instructions web page is up

2012-03-18 Thread Michael Mol
On Sun, Mar 18, 2012 at 12:05 PM, Sebastian Pipping sp...@gentoo.org wrote:
 On 03/18/2012 09:42 AM, Michael Mol wrote:
 BTW, where would one go to get involved in organization of the wiki? I
 found myself wishing for templates for consistent formatting of things
 like files, one-liners and naming of ebuilds, but I don't think I
 ought to simply create the templates I'm looking for without talking
 with someone first.

 For file content and kernel config I have seen templates up there
 already.  I'm not sure what you mean by one-liners and naming of ebuilds.

 Please get in touch with these people:

  http://www.gentoo.org/proj/en/wiki/

 You can reach all of them on alias wiki at g.o.

I'm in #gentoo-wiki, now, and have been asked if I had permission to
copy Walt's page. So...Walt, did I have permission to copy your page?

-- 
:wq



Re: [gentoo-user] Re: systemd? [ Was: The End Is Near ... ]

2012-03-18 Thread Michael Mol
On Sun, Mar 18, 2012 at 3:25 PM, Canek Peláez Valdés can...@gmail.com wrote:
 On Sun, Mar 18, 2012 at 5:23 AM, Pandu Poluan pa...@poluan.info wrote:

 On Mar 18, 2012 3:52 PM, Canek Peláez Valdés can...@gmail.com wrote:

 If the config file doesn't exist, the service will not start, and you
 can check the reason why with

 systemctl status sshd.service

 And of course you can set another mini sevice unit file to create the
 hostkeys. But I repeat: I think those tasks belong in the package
 manager, not the init script.


 Between installation by package manager and actual execution by the init
 system, things might happen on the required file(s). Gentoo's initscript
 guards against this possibility *plus* providing helpful error messages in
 /var/rc.log

 Or, said configuration files might be corrupted; the OpenRC initscript -- if
 written defensively -- will be able to detect that and (perhaps) fallback to
 something sane. systemd can't do that, short of putting all required
 intelligence into a script which it executes on boot.

 That is a completely valid point, but I don't think that task belongs
 into the init system. The init system starts and stops services, and
 monitors them; checking for configuration files and creating hostkeys
 is part of the installation process. If something got corrupted
 between installation time and now, I would prefer my init system not
 to start a service; just please tell me that something is wrong.

 However, it's of course debatable. I agree with systemd's behavior;
 it's cleaner, more elegant, and it follows the Unix tradition: do one
 thing, and do it right.

I like and see benefit to the systemd approach, honestly, but I don't
think it necessarily follows to say that that belongs in the
installation process, since it shouldn't be the responsibility of the
init process.

The way things sit currently, Gentoo doesn't default to adding new
services to any runlevel, and in the process of setting up or
reconfiguring a system, services may be added, removed, then possibly
added again. Having a service's launch script perform one-time checks
makes perfect sense in this regard. It's lazy evaluation; you don't do
non-trivial work until you know it needs to be done. (And generating a
2048-bit or 4096-bit SSH key certainly qualifies as non-trivial work!)
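A minimal sketch of that lazy-evaluation pattern (ensure_key is a hypothetical helper; Gentoo's actual sshd initscript does something similar in its start path):

```shell
# ensure_key FILE GENERATOR...: run GENERATOR (with FILE appended) only
# when FILE is missing, so the expensive step happens at most once.
# Hypothetical helper illustrating the one-time-check idea above.
ensure_key() {
    key=$1; shift
    [ -e "$key" ] || "$@" "$key"
}

# In an sshd launch script this might look like (illustrative flags):
#   ensure_key /etc/ssh/ssh_host_rsa_key ssh-keygen -q -t rsa -N '' -f
```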

Also, I think the "code golf" argument is a poor one; counting how many
lines something takes to meet some particular goal ignores any other
intended goals the compared object also meets. When you're comparing apples to
apples, the argument is fine. When you're comparing apples to oranges,
the argument is weakened; they're both fruits, but they still have
different purposes in the larger context.

In this case, I think the happy medium would be for systemd to start a
service-provided launch script, which performs whatever additional
checks are wanted or desired. Either way, it's the responsibility of
whoever maintains the package for that service.


-- 
:wq



Re: [gentoo-user] mdev for udev substitution instructions web page is up

2012-03-18 Thread Michael Mol
On Sun, Mar 18, 2012 at 3:57 PM, Walter Dnes waltd...@waltdnes.org wrote:
 On Sun, Mar 18, 2012 at 01:06:00PM -0400, Michael Mol wrote

 I'm in #gentoo-wiki, now, and have been asked if I had permission to
 copy Walt's page. So...Walt, did I have permission to copy your page?

  Yes you did.  My previous email should have been enough.  If they want
 explicit permission, copy this email to them.  As I mentioned in my
 previous email, I'll be re-directing the web page to point to the wiki.
 Thanks very much for your work.

Hey, I haven't been able to help test, but wiki editing is part of my
hobby. It's the least I could do. I'll just hand them a link to your
email when it pops up on archives.gentoo.org. Hey, there it is!

http://archives.gentoo.org/gentoo-user/msg_acc1deaf79fd8f1536add7c5c2daf24a.xml

-- 
:wq



Re: [gentoo-user] Re: LVM, /usr and really really bad thoughts.

2012-03-15 Thread Michael Mol
On Thu, Mar 15, 2012 at 10:09 AM, Mike Edenfield kut...@kutulu.org wrote:
 From: Dale [mailto:rdalek1...@gmail.com]

 This has been one of my points too.  I could go out and buy me a bluetooth
 mouse/keyboard but I don't because it to complicates matters.

 I had a long reply to Walt that I (probably wisely) decided not to send, but
 the basic point of it is also relevant here. My response to his (IMO
 needlessly aggressive) email was basically this:

 Why *shouldn't I* be able to go buy a Bluetooth keyboard and mouse if I
 wanted to? Those things *work perfectly fine with udev*. And why wouldn't I
 want to use the *same* solution for all of my various machines, even if that
 solution is overkill for half of them? Just because my laptop doesn't need
 bluetoothd support in udev doesn't mean using udev there *is bad*. (I don't
 need 80% of what's in the Linux kernel but I still install one...)

I wouldn't say you shouldn't be able to. (Beyond the fact that I think
Bluetooth is a pile of smelly carp that people shouldn't have to bend
over backwards to support, but that's a different issue...)


 I am not in any way denigrating the work he's doing. I think it's awesome
 and I've tried to help where I can. But I'm pretty fed up with people like
 him acting as if the current udev solution is the end of the world. I've
 heard it called everything from "design mistake" to "out of control
 truck full of manure".

"Design mistake" is a perfectly reasonable description, and I'd agree
with that. It's also not pejorative, but I'd say the two vocal sides
of the issue are far too polarized to notice that. "Truck full of
manure" is probably a bit far, but that description only holds if
important things which shouldn't need a dependency on udev gain or
keep them. Rather like how installing a console Qt app on a Debian
server pulls in X.


 I have three PCs in my home running Gentoo. Two of them would boot correctly
 using Walt's new solution (mdev and no /usr mounted at boot) and one would
 not. *All three of them* boot correctly using udev. 100% success  66%
 success, so clearly the udev solution is a perfectly legitimate solution to
 a real world problem. At work, those numbers are likely different, and
 Walt's solution might be a working approach -- if udev didn't already work
 fine in 100% of those cases, too.

Sure.


 Instead of asking why everyone else should be forced to use the udev
 solution *that already works*, you should be focusing on explaining to
 everyone else the reasons why it is worth the time and effort to configure
 *something different* for those same machines.

There's little use in explaining to someone why they should use
something apart from what they're comfortable with. Moving out of a
comfort zone requires personal motivation, not external. If udev works
for someone, they should use it. If they discover udev is getting in
their way, then they should look for alternatives.

I use apache2+squid3 on my server, despite hordes of people telling me
I should use nginx. Apache+squid works appropriately well for my
circumstance.

  There was a reason why people
 stopped using static /dev, and devfs; maybe there is a reason why people
 should stop using udev, but thus far that reason seems to be "initramfs
 makes us cranky".

*That* is a matter of systemic complexity and maintenance difficulty;
the increased complexity tickles the spider senses of anyone who's had
to design, develop or maintain very complex systems with few
leave-alone black boxes. It's very difficult to increase the
complexity of a system without adding bugs or mistakes anywhere from
code to testing procedures to package management to end-user
maintenance. So when a system starts becoming more complex, and I'm
told that I'm going to have to go along for the ride, I get concerned.
Before Walt started pulling mdev from being a busybox-only component,
that was exactly the scenario. (Thank you, Walt!)

The only cases I've ever conceivably needed to use an initramfs have
been where I needed a kernel module available early. Rather than build
that as a module and build an initramfs, I simply build it into the
kernel. Certainly, there are portions of the kernel (particularly some
sound cards) where that doesn't work, and if someone needs those
portions available early, then an initramfs is going to be the tool
for them.


 There's no need to get mean-spirited just because you choose a different
 audience than freedesktop.org as the target for your solution.

That's really not the reason for it. I mean, sure, I think the initial
reactions were mostly grumpiness and misinformed outrage, but I don't
think the contrariness really *baked* in until people got a twofer of
"you're going to use udev unless you write the code to get around it"
and "oh, you're writing the code? You're wasting your time and you're
going to fail." That, I think, is when the real malaise set in.

 It just makes
 you look petty and childish. Produce an alternative to
 udev/initramfs/single root that 

Re: [gentoo-user] Re: gmail smtp overwrites the sender

2012-03-15 Thread Michael Mol
On Thu, Mar 15, 2012 at 10:29 AM, Grant Edwards
grant.b.edwa...@gmail.com wrote:
 On 2012-03-14, Mick michaelkintz...@gmail.com wrote:
 On Monday 12 Mar 2012 18:34:37 Grant Edwards wrote:
 On 2012-03-12, Stroller strol...@stellar.eclipse.co.uk wrote:

 No, I simply meant that if you use Postfix you don't have to use
 anyone else's SMTP server,

 If you've got a static IP address, a domain, an MX record, and
 whatever other requirements a lot of sites are now placing upon
 senders of mail.

 I used to use my own SMTP server, 10 years ago it worked fine.  More
 recently, too many destinations wouldn't accept mail from me -- so I
 had to start using mail relays.

 Perhaps your mail address was blacklisted? Many ISPs IP address
 blocks are blacklisted these days.

 I know that was sometimes the case from the rejection message sent by
 the destination SMTP server.  Even though I had a static IP address
 and a valid MX entry for the sending machine's hostname, some sites
 wouldn't accept mail because my static IP address was in a block used
 for DSL customers (of which I was one).

Yeah, I can't even send email to my gmail account from my Comcast
public IPv4 address.


 Also some ISPs are blocking ports (like 25 and 2525) to minimise spam
 sent out of compromised boxen.  They would typically allow you to
 relay through their mailservers though.

 I've never run into that, but I know people who have.

 In either case, I wouldn't advise anybody to try using their own SMTP
 server to deliver mail directly to destinations unless they have their
 own domain, their own IP block, and the time+skills require to fight
 with the problems.  Anybody with the requisite resources and skills
 probably wouldn't be asking questions here about how to use Gmail's
 SMTP server.

My workaround involved relaying my network's outgoing email through my
VPS node's email server. (My VPS provider, prgmr.com, doesn't seem to
be on any blocklists, etc.)

-- 
:wq



Re: [gentoo-user] How can I trigger kernel panic?

2012-03-15 Thread Michael Mol
On Thu, Mar 15, 2012 at 12:55 PM, Jarry mr.ja...@gmail.com wrote:
 On 14-Mar-12 19:41, ZHANG, Le wrote:


     So my question is: Can I somehow deliberately trigger
     kernel panic (or kernel oops)?

 For panic, echo c > /proc/sysrq-trigger


 After I issued the above mentioned command, my system
 instantly froze to death. Nothing changed on screen,
 no kernel panic or Ooops screen. Just frozen...

 No reaction to keyboard or mouse. No auto-reboot either.
 The only thing I could do is to press Reset. Not exactly
 what I have been expecting...

Were you running under X? The panic would have killed X, which
wouldn't have released control over the video hardware.

There's a SysRq sequence to get around this, but I don't remember it.

-- 
:wq



Re: [gentoo-user] How can I trigger kernel panic?

2012-03-15 Thread Michael Mol
On Thu, Mar 15, 2012 at 3:17 PM, Mick michaelkintz...@gmail.com wrote:
 On Thursday 15 Mar 2012 17:02:15 Michael Mol wrote:
 On Thu, Mar 15, 2012 at 12:55 PM, Jarry mr.ja...@gmail.com wrote:
  On 14-Mar-12 19:41, ZHANG, Le wrote:
      So my question is: Can I somehow deliberately trigger
      kernel panic (or kernel oops)?
 
  For panic, echo c > /proc/sysrq-trigger
 
  After I issued the above mentioned command, my system
  instantly froze to death. Nothing changed on screen,
  no kernel panic or Ooops screen. Just frozen...
 
  No reaction to keyboard or mouse. No auto-reboot either.
  The only thing I could do is to press Reset. Not exactly
  what I have been expecting...

 Were you running under X? The panic would have killed X, which
 wouldn't have released control over the video hardware.

 There's a SysRq sequence to get around this, but I don't remember it.

 Ctrl+Alt+

 R E I S U B

 (busier in reverse)

 After an E or I you should be back in a console, unless things are badly
 screwed.

Is that Ctrl+Alt+SysRq+(R E I S U B), or is the SysRq key not actually used?



-- 
:wq



Re: [gentoo-user] Re: gmail smtp overwrites the sender

2012-03-15 Thread Michael Mol
On Thu, Mar 15, 2012 at 3:54 PM, Mick michaelkintz...@gmail.com wrote:
 On Thursday 15 Mar 2012 14:51:10 Michael Mol wrote:
 On Thu, Mar 15, 2012 at 10:29 AM, Grant Edwards

 grant.b.edwa...@gmail.com wrote:
  On 2012-03-14, Mick michaelkintz...@gmail.com wrote:

  Perhaps your mail address was blacklisted? Many ISPs IP address
  blocks are blacklisted these days.
 
  I know that was sometimes the case from the rejection message sent by
  the destination SMTP server.  Even though I had a static IP address
  and an valid MX entry for the sending machine's hostname, some sites
  wouldn't accept mail because my static IP addres was in a block used
  for DSL customers (of which I was one).

 Yeah, I can't even send email to my gmail account from my Comcast
 public IPv4 address.

 Have you tried using port 587?  Comcast should accept relaying on that port
 IIRC with your customer username/passwd.

Researched that, but I ultimately didn't go that route because I
couldn't find any good documentation on the appropriate settings.


 Or are you saying that Google will not accept incoming mail from Comcast
 addresses/IP blocks?

Not saying that; to my knowledge, Gmail accepts relay through
Comcast's relay points, but I haven't tested that. I've only tested
direct connections.

-- 
:wq



Re: [gentoo-user] Re: gmail smtp overwrites the sender

2012-03-15 Thread Michael Mol
On Thu, Mar 15, 2012 at 5:13 PM, Mick michaelkintz...@gmail.com wrote:
 On Thursday 15 Mar 2012 20:07:54 Michael Mol wrote:
 On Thu, Mar 15, 2012 at 3:54 PM, Mick michaelkintz...@gmail.com wrote:

  Have you tried using port 587?  Comcast should accept relaying on that
  port IIRC with your customer username/passwd.

 Researched that, but I ultimately didn't go that route because I
 couldn't find any good documentation on the appropriate settings.

 OK, have a look at this in case it helps.

  http://www.linuxha.com/other/sendmail/index.html

Cool beans. I'm not likely to change things (for now), but I'll
remember where I saw it, if I need it. :)
-- 
:wq



Re: [gentoo-user] Beta test Gentoo with mdev instead of udev; version 5 - failure :-(

2012-03-14 Thread Michael Mol
On Wed, Mar 14, 2012 at 11:20 AM, Tanstaafl tansta...@libertytrek.org wrote:
 On 2012-03-13 8:07 PM, Canek Peláez Valdés can...@gmail.com wrote:

 You want it simple? That's fine, it is possible. It's just that it
 will not solve the general problem, just a very specific subset of it.
 Just as mdev is doing; Walt just posted an email explaining that if
 you use GNOME, KDE, XFCE, or LVM2, mdev is not for you.


 Very interesting thread guys, and thanks for keeping it relatively civil
 despite the passion behind the objections being raised...

 I just wanted to point out one thing (and ask a question about it) to anyone
 who argues that servers don't need this - if LVM2 really does eliminate the
 possibility of using mdev for fundamental reasons (as opposed to arbitrary
 decisions), that rules out a *lot* of server installations.

 So, that is my question... what is it about LVM2 that *requires* udev?

 Or asked another way -

 Why is LVM2 incapable of using mdev?

The presumption is that lvm's dependent binaries would be found
somewhere under a mount point other than / (such as /usr), which gives
you a chicken-and-egg problem if mounting that mount point requires
lvm.
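That dependency is easy to check on a running system. A rough sketch (the binary paths are illustrative, and ldd output formats vary):

```shell
# If an early-boot tool links against libraries under /usr, mounting a
# separate (possibly LVM-backed) /usr before the device manager runs
# becomes a chicken-and-egg problem. Check a few candidate binaries:
for f in /sbin/lvm /bin/mount /bin/sh; do
  [ -e "$f" ] || continue
  if ldd "$f" 2>/dev/null | grep -q '/usr/'; then
    echo "$f needs /usr"
  fi
done
echo "check complete"
```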

-- 
:wq



Re: [gentoo-user] Beta test Gentoo with mdev instead of udev; version 5 - failure :-(

2012-03-14 Thread Michael Mol
On Wed, Mar 14, 2012 at 1:22 PM, Canek Peláez Valdés can...@gmail.com wrote:
 On Wed, Mar 14, 2012 at 9:16 AM, Alan Mackenzie a...@muc.de wrote:
 Hello, Canek

 On Tue, Mar 13, 2012 at 06:07:32PM -0600, Canek Peláez Valdés wrote:
 On Tue, Mar 13, 2012 at 5:03 PM, Alan Mackenzie a...@muc.de wrote:

  The new hardware will just work if there are the correct drivers
 built in.  That's as true of udev as it is of mdev as it is of the old
 static /dev with mknod.

 No, it is not. You are leaving out the sine qua non of the matter: the
 device has to be built, *and the /dev file should exists*. I hope you
 are not suggesting that we put *ALL* the possible files under /dev,
 because that was the idea before devfs, and it doesn't work *IN
 GENERAL*.

 Previously you made appropriate /dev entries with mknod, giving the
 device major and minor numbers as parameters.  This appeared to work in
 general - I'm not aware of any device it didn't work for.

 Again, I believe you are not following me. In *general* the number of
 potential device files under /dev is not bounded. Given N device
 filess, I can give you an example where you would need N+1 device
 files. With your experience, I assume you know about huge arrays of
 SCSI disks, for example; add to that whatever number of USB devices
 (and the hubs necessary to connect them), whatever number of Bluetooth
 thingies, etc., etc.

  Therefore, mknod doesn't solve the problem in general, because I can
 always give an example where the preset device files on  /dev are not
 enough.

And I can always give an example where there can't be enough inodes
(or perhaps even RAM) to contain enough device nodes. General Case
is a beautiful thing for a theoretical system, but my computer is not
a theoretical system. Neither is my phone, or my server.


 So, you need something to handle device files on /dev, so you don't
 need every possible device file for every possible piece of hardware.
 But then you want to handle the same device with the same device name,
 so you need some kind of database. Then for the majority of users,
 they want to see *something* happen when they connect a piece of
 hardware to their computers.

 That happened under the old static /dev system.  What was that /dev
 system, if not a database matching /dev names to device numbers?  I'm not
 sure what you mean by same device and same device name.

 That if I connect a USB wi-fi dongle, and it appears with the name
 wlan23, I want that dongle to have the wlan23 name *every* time. Good
 luck doing that without a database.

udev does something nice here, and I believe mdev is capable of
similar. sysfs exports device attributes such as model and serial
number, and you could trivially derive the node name from that.
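A toy sketch of that idea, reading a sysfs attribute to build a stable name (the attribute and naming scheme here are illustrative; real udev/mdev rules are considerably richer):

```shell
# For each network interface, print a name derived from its hardware
# address -- the same physical dongle always maps to the same name.
for dev in /sys/class/net/*; do
  [ -r "$dev/address" ] || continue
  printf 'net-%s -> %s\n' "$(cat "$dev/address" | tr ':' '-')" "${dev##*/}"
done
echo "enumeration done"
```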


 So you need to handle the events associated with the connections (or
 discovery, for things like Bluetooth) of the devices, and since udev is
 already handling the database and the detection of
 connections/discovery, I agree with the decision of letting udev
 execute programs when something gets connected. You could get that
 function in another program, but you are only moving the problem, *and
 it can also happen very early at boot time*, so let udev handle it all
 the time.

 Early in boot time, you only need things like disk drives, graphic cards
 and keyboards.  Anything else can be postponed till late boot time.

 Bluetooth keyboards. Done, you made my system unbootable when I need
 to run fsck by hand after a power failure.

The userland bluetooth stack is an abomination. (Actually, the _whole_
bluetooth stack is an abomination. You don't want to know what
embedded developers who build car stereos and the like have to go
through to try to test things. Or what real packets fundamentally look
like 'on the wire'.)

It needs a real overhaul. I used to use a bluetooth keyboard, but I
found it to be a real mess. I even joined the Linux Documentation
Project with the intent of getting bluetooth profiles, apps and stacks
indexed and cross-referenced, but there's just way too much going
wrong with bluetooth. I eventually switched to using a proprietary
wireless keyboard, and relegated the bluetooth keyboard to secondary
access and to controlling the PS3.

Besides, your BIOS isn't going to support bluetooth, either; if you're
concerned about failure-case recovery, and you don't have a keyboard
you can navigate your BIOS with, you're very probably doing something
wrong.


 I hope you see where I'm going. As I said before, mdev could (in
 theory) do the same that udev does. But then it will be as complicated
 as udev, *because it is a complicated problem* in general. And I again
 use my fuel injection analogy: it is not *necessary*. It is just very
 damn convenient.

 It may be a complicated problem in general, but many people do not need
 that generality.

 ^ That's your mistake! (IMHO). I explain below.

 I suspect the vast majority don't need it.  Neither the
 typical desktop, the typical 

Re: [gentoo-user] Beta test Gentoo with mdev instead of udev; version 5 - failure :-(

2012-03-14 Thread Michael Mol
On Wed, Mar 14, 2012 at 2:45 PM, Canek Peláez Valdés can...@gmail.com wrote:
 On Wed, Mar 14, 2012 at 12:09 PM, Michael Mol mike...@gmail.com wrote:
 On Wed, Mar 14, 2012 at 1:22 PM, Canek Peláez Valdés can...@gmail.com 
 wrote:
 On Wed, Mar 14, 2012 at 9:16 AM, Alan Mackenzie a...@muc.de wrote:
 Hello, Canek

 On Tue, Mar 13, 2012 at 06:07:32PM -0600, Canek Peláez Valdés wrote:
 On Tue, Mar 13, 2012 at 5:03 PM, Alan Mackenzie a...@muc.de wrote:

  The new hardware will just work if there are the correct drivers
 built in.  That's as true of udev as it is of mdev as it is of the old
 static /dev with mknod.

 No, it is not. You are leaving out the sine qua non of the matter: the
 device has to be built, *and the /dev file should exists*. I hope you
 are not suggesting that we put *ALL* the possible files under /dev,
 because that was the idea before devfs, and it doesn't work *IN
 GENERAL*.

 Previously you made appropriate /dev entries with mknod, giving the
 device major and minor numbers as parameters.  This appeared to work in
 general - I'm not aware of any device it didn't work for.

 Again, I believe you are not following me. In *general* the number of
 potential device files under /dev is not bounded. Given N device
 files, I can give you an example where you would need N+1 device
 files. With your experience, I assume you know about huge arrays of
 SCSI disks, for example; add to that whatever number of USB devices
 (and the hubs necessary to connect them), whatever number of Bluetooth
 thingies, etc., etc.

  Therefore, mknod doesn't solve the problem in general, because I can
 always give an example where the preset device files on  /dev are not
 enough.

 And I can always give an example where there can't be enough inodes
 (or perhaps even RAM) to contain enough device nodes. General Case
 is a beautiful thing for a theoretical system, but my computer is not
 a theoretical system. Neither is my phone, or my server.

 You are arguing that the mknod method should be used? Because that
 discussion happened ten years ago; that train is long gone. If you want
 to argue with someone about it, it would not be me.

No, I was taking your argument to its perceived end result. You want
the universal solution, but that requires limitless resources in
things like memory and integer sizes. The software doesn't exist
within such an environment. The assumptions which it's already
depending on limit its utility in lower-end hardware.




 So, you need something to handle device files on /dev, so you don't
 need every possible device file for every possible piece of hardware.
 But then you want to handle the same device with the same device name,
 so you need some kind of database. Then for the majority of users,
 they want to see *something* happen when they connect a piece of
 hardware to their computers.

 That happened under the old static /dev system.  What was that /dev
 system, if not a database matching /dev names to device numbers?  I'm not
 sure what you mean by same device and same device name.

 That if I connect a USB wi-fi dongle, and it appears with the name
 wlan23, I want that dongle to have the wlan23 name *every* time. Good
 luck doing that without a database.

 udev does something nice here, and I believe mdev is capable of
 similar. sysfs exports device attributes such as model and serial
 number, and you could trivially derive the node name from that.

 I think (as does the udev maintainers) that there should be a strong
 coupling between the device manager, the database handling, and the
 firing of scripts. Otherwise, we get back to devfs, which again, that
 train is long gone.

From the sound of it, mdev matches that description.
mdev supports the renaming of devices, so there's your database. It
supports firing scripts.



 So you need to handle the events associated with the connections (or
 discovery, for things like Bluetooth) of the devices, and since udev is
 already handling the database and the detection of
 connections/discovery, I agree with the decision of letting udev
 execute programs when something gets connected. You could get that
 function in another program, but you are only moving the problem, *and
 it can also happen very early at boot time*, so let udev handle it all
 the time.

 Early in boot time, you only need things like disk drives, graphic cards
 and keyboards.  Anything else can be postponed till late boot time.

 Bluetooth keyboards. Done, you made my system unbootable when I need
 to run fsck by hand after a power failure.

 The userland bluetooth stack is an abomination. (Actually, the _whole_
 bluetooth stack is an abomination. You don't want to know what
 embedded developers who build car stereos and the like have to go
 through to try to test things. Or what real packets fundamentally look
 like 'on the wire'.)

 It needs a real overhaul. I used to use a bluetooth keyboard, but I
 found it to be a real mess. I even joined the Linux

Re: [gentoo-user] hard drive encryption

2012-03-13 Thread Michael Mol
On Tue, Mar 13, 2012 at 12:11 PM, Florian Philipp li...@binarywings.net wrote:
 On 13.03.2012 12:55, Valmor de Almeida wrote:
 On 03/11/2012 02:29 PM, Florian Philipp wrote:
 On 11.03.2012 16:38, Valmor de Almeida wrote:

 Hello,

 I have not looked at encryption before and find myself in a situation
 that I have to encrypt my hard drive. I keep /, /boot, and swap outside
 LVM, everything else is under LVM. I think all I need to do is to
 encrypt /home which is under LVM. I use reiserfs.

 I would appreciate suggestions and pointers on what is practical and
 simple in order to accomplish this task with a minimum of downtime.

 Thanks,

 --
 Valmor



 Is it acceptable for you to have a commandline prompt for the password
 when booting? In that case you can use LUKS with the /etc/init.d/dmcrypt

 I think so.

 init script. /etc/conf.d/dmcrypt should contain some examples. As you
 want to encrypt an LVM volume, the lvm init script needs to be started
 before this. As I see it, there is no strict dependency between those
 two scripts. You can add this by adding this line to /etc/rc.conf:
 rc_dmcrypt_after=lvm
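The ordering tweak described above would look like this (a sketch of an OpenRC /etc/rc.conf entry; quoting style may vary):

```shell
# /etc/rc.conf -- make the dmcrypt init script wait for lvm, so the
# logical volume exists before dmcrypt tries to unlock it
rc_dmcrypt_after="lvm"
```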

 For creating a LUKS-encrypted volume, look at
 http://en.gentoo-wiki.com/wiki/DM-Crypt

 Currently looking at this.


 You won't need most of what is written there; just section 9,
 Administering LUKS and the kernel config in section 2, Assumptions.

 Concerning downtime, I'm not aware of any solution that avoids copying
 the data over to the new volume. If downtime is absolutely critical, ask
 and we can work something out that minimizes the time.

 Regards,
 Florian Philipp


 Since I am planning to encrypt only home/ under LVM control, what kind
 of overhead should I expect?

 Thanks,


 What do you mean with overhead? CPU utilization? In that case the
 overhead is minimal, especially when you run a 64-bit kernel with the
 optimized AES kernel module.

Rough guess: Latency. With encryption, you can't DMA disk data
directly into a process's address space, because you need the decrypt
hop.

Try running bonnie++ on encrypted vs non-encrypted volumes. (Or not; I
doubt you have the time and materials to do a good, meaningful set of
time trials)
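Short of a full bonnie++ run, even a crude dd timing shows the ballpark. This is purely illustrative; a real comparison would run it once on the plain volume and once on the LUKS volume, with file sizes larger than RAM and proper syncing:

```shell
# Crude, hedged stand-in for a benchmark: time a small buffered write
# and print dd's throughput summary line. Not a substitute for bonnie++.
dd if=/dev/zero of=/tmp/dd-test bs=1M count=8 2>&1 | tail -n 1
rm -f /tmp/dd-test
```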

-- 
:wq



Re: [gentoo-user] hard drive encryption

2012-03-13 Thread Michael Mol
On Tue, Mar 13, 2012 at 12:49 PM, Florian Philipp li...@binarywings.net wrote:
 On 13.03.2012 17:26, Michael Mol wrote:
 On Tue, Mar 13, 2012 at 12:11 PM, Florian Philipp li...@binarywings.net 
 wrote:
 On 13.03.2012 12:55, Valmor de Almeida wrote:
 On 03/11/2012 02:29 PM, Florian Philipp wrote:
 On 11.03.2012 16:38, Valmor de Almeida wrote:

 Hello,

 I have not looked at encryption before and find myself in a situation
 that I have to encrypt my hard drive. I keep /, /boot, and swap outside
 LVM, everything else is under LVM. I think all I need to do is to
 encrypt /home which is under LVM. I use reiserfs.

 I would appreciate suggestions and pointers on what is practical and
 simple in order to accomplish this task with a minimum of downtime.

 Thanks,

 --
 Valmor



 Is it acceptable for you to have a commandline prompt for the password
 when booting? In that case you can use LUKS with the /etc/init.d/dmcrypt

 I think so.

 init script. /etc/conf.d/dmcrypt should contain some examples. As you
 want to encrypt an LVM volume, the lvm init script needs to be started
 before this. As I see it, there is no strict dependency between those
 two scripts. You can add this by adding this line to /etc/rc.conf:
 rc_dmcrypt_after=lvm

 For creating a LUKS-encrypted volume, look at
 http://en.gentoo-wiki.com/wiki/DM-Crypt

 Currently looking at this.


 You won't need most of what is written there; just section 9,
 Administering LUKS and the kernel config in section 2, Assumptions.

 Concerning downtime, I'm not aware of any solution that avoids copying
 the data over to the new volume. If downtime is absolutely critical, ask
 and we can work something out that minimizes the time.

 Regards,
 Florian Philipp


 Since I am planning to encrypt only home/ under LVM control, what kind
 of overhead should I expect?

 Thanks,


 What do you mean with overhead? CPU utilization? In that case the
 overhead is minimal, especially when you run a 64-bit kernel with the
 optimized AES kernel module.

 Rough guess: Latency. With encryption, you can't DMA disk data
 directly into a process's address space, because you need the decrypt
 hop.


 Good call. Wouldn't have thought of that.

 Try running bonnie++ on encrypted vs non-encrypted volumes. (Or not; I
 doubt you have the time and materials to do a good, meaningful set of
 time trials)


 Yeah, that sounds like something for which you need a very dull winter
 day. Besides, I've already lost a poorly cooled HDD on a benchmark.

Sounds like something we can do at my LUG at one of our weekly
socials. The part I don't know is how to set this kind of thing up and
how to tune it; I don't want it to be like Microsoft's comparison of
SQL Server against MySQL from a decade or so ago, where they didn't
tune MySQL for their bench workload.

-- 
:wq



Re: [gentoo-user] hard drive encryption

2012-03-13 Thread Michael Mol
On Tue, Mar 13, 2012 at 2:06 PM, Florian Philipp li...@binarywings.net wrote:
 On 13.03.2012 18:45, Frank Steinmetzger wrote:
 On Tue, Mar 13, 2012 at 05:11:47PM +0100, Florian Philipp wrote:

 Since I am planning to encrypt only home/ under LVM control, what kind
 of overhead should I expect?

 What do you mean with overhead? CPU utilization? In that case the
 overhead is minimal, especially when you run a 64-bit kernel with the
 optimized AES kernel module.

 Speaking of that...
 I always wondered what the exact difference was between AES and AES i586. I
 can gather myself that it's about optimisation for a specific architecture.
 But which one would be best for my i686 Core 2 Duo?

 From what I can see in the kernel sources, there is a generic AES
 implementation using nothing but portable C code and then there is
 aes-i586 assembler code with aes_glue C code.


 So I assume the i586
 version is better for you --- unless GCC suddenly got a lot better at
 optimizing code.

Since when, exactly? GCC isn't the best compiler at optimization, but
I fully expect current versions to produce better code for x86-64 than
hand-tuned i586. Wider registers, more registers, crypto acceleration
instructions and SIMD instructions are all very nice to have. I don't
know the specifics of AES, though, or what kind of crypto algorithm it
is, so it's entirely possible that one can't effectively parallelize
it except in some relatively unique circumstances.
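One quick way to see which AES implementations a given kernel actually registered (hedged: /proc/crypto field names are stable, but the entries depend on kernel config and loaded modules):

```shell
# List AES entries from the kernel crypto registry -- typically
# aes-generic plus any assembler/AES-NI variant, each with a priority.
out=$(grep -A2 'aes' /proc/crypto 2>/dev/null | head -n 20)
printf '%s\n' "${out:-no aes entries found in /proc/crypto}"
```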

-- 
:wq



Re: [gentoo-user] emerge Break

2012-03-13 Thread Michael Mol
On Tue, Mar 13, 2012 at 2:57 PM, siefke_lis...@web.de
siefke_lis...@web.de wrote:
 Hello,

 I try to install avidemux and so I run emerge avidemux. But at
 media-libs/aften-0.0.8 break emerge with the message:

 cmake: error while loading shared libraries: libnettle.so.3:
 cannot open shared object file: No such file or directory

 But the libnettle.so.3 is present on my system:
 siefke@gentoo-desk ~ $ locate libnettle.so.3
 /usr/lib/libnettle.so.3
 /usr/lib/libnettle.so.3.0

 I tried env-update but nothing changed. Does anyone have an idea?

I don't know a whole lot about multilib, but I believe /usr/lib is a
32-bit library folder. Perhaps avidemux is looking for a 64-bit
version?

Just started emerging avidemux on one of my boxes, but libnettle
doesn't appear to get pulled in. Finally, my emerge result line reads:

[ebuild  N ] media-video/avidemux-2.5.4-r2  USE="aac aften alsa
dts jack libsamplerate mp3 nls qt4 sdl truetype vorbis x264 xv xvid
-amr (-esd) -gtk -oss -pulseaudio" LINGUAS="-bg -ca -cs -de -el -es
-fr -it -ja -pt_BR -ru -sr -sr@latin -tr -zh_TW" 17,730 kB
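For what it's worth, the usual Gentoo answer to a binary linked against a vanished library soname is revdep-rebuild from app-portage/gentoolkit (a hedged sketch; drop -p to actually rebuild):

```shell
# Show which installed packages still link against the stale libnettle
# soname; -p/--pretend only prints what would be rebuilt.
if command -v revdep-rebuild >/dev/null 2>&1; then
  revdep-rebuild -p --library libnettle.so.3
else
  echo "revdep-rebuild not installed (app-portage/gentoolkit)"
fi
```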

-- 
:wq



Re: [gentoo-user] hard drive encryption

2012-03-13 Thread Michael Mol
On Tue, Mar 13, 2012 at 2:58 PM, Florian Philipp li...@binarywings.net wrote:
 On 13.03.2012 19:18, Michael Mol wrote:
 On Tue, Mar 13, 2012 at 2:06 PM, Florian Philipp li...@binarywings.net 
 wrote:
 On 13.03.2012 18:45, Frank Steinmetzger wrote:
 On Tue, Mar 13, 2012 at 05:11:47PM +0100, Florian Philipp wrote:

 Since I am planning to encrypt only home/ under LVM control, what kind
 of overhead should I expect?

 What do you mean with overhead? CPU utilization? In that case the
 overhead is minimal, especially when you run a 64-bit kernel with the
 optimized AES kernel module.

 Speaking of that...
 I always wondered what the exact difference was between AES and AES i586. I
 can gather myself that it's about optimisation for a specific architecture.
 But which one would be best for my i686 Core 2 Duo?

 From what I can see in the kernel sources, there is a generic AES
 implementation using nothing but portable C code and then there is
 aes-i586 assembler code with aes_glue C code.


 So I assume the i586
 version is better for you --- unless GCC suddenly got a lot better at
 optimizing code.

 Since when, exactly? GCC isn't the best compiler at optimization, but
 I fully expect current versions to produce better code for x86-64 than
 hand-tuned i586. Wider registers, more registers, crypto acceleration
 instructions and SIMD instructions are all very nice to have. I don't
 know the specifics of AES, though, or what kind of crypto algorithm it
 is, so it's entirely possible that one can't effectively parallelize
 it except in some relatively unique circumstances.


 One sec. We are talking about a Core2 Duo running in 32bit mode, right?
 That's what the i686 reference in the question meant --- or at least,
 that's what I assumed.

I think you're right; I missed that part.


 If we talk about 32bit mode, none of what you describe is available.
 Those additional registers and instructions are not accessible with i686
 instructions. A Core 2 also has no AES instructions.

 Of course, GCC could make use of what it knows about the CPU, like
 number of parallel pipelines, pipeline depth, cache size, instructions
 added in i686 and so on. But even then I doubt it can outperform
 hand-tuned assembler, even if it is for a slightly older instruction set.

I'm still not sure why. I'll posit that some badly-written C could
place constraints on the compiler's optimizer, but GCC should have
little problem handling well-written C, separating semantics from
syntax and finding good transforms of the original code to get
provably-same results. Unless I'm grossly overestimating the
capabilities of its AST processing and optimization engine.


 If instead we are talking about a Core 2 Duo running in x86_64 mode, we
 should be talking about the aes-x86_64 module instead of the aes-i586
 module and that makes use of the complete instruction set of the Core 2,
 including SSE2.

FWIW, SSE2 is available on 32-bit processors; I have code in the field
using SSE2 on Pentium 4s.

-- 
:wq



Re: [gentoo-user] hard drive encryption

2012-03-13 Thread Michael Mol
On Tue, Mar 13, 2012 at 3:07 PM, Stroller
strol...@stellar.eclipse.co.uk wrote:

 On 13 March 2012, at 18:18, Michael Mol wrote:
 ...
 So I assume the i586
 version is better for you --- unless GCC suddenly got a lot better at
 optimizing code.

 Since when, exactly? GCC isn't the best compiler at optimization, but
 I fully expect current versions to produce better code for x86-64 than
 hand-tuned i586. Wider registers, more registers, crypto acceleration
 instructions and SIMD instructions are all very nice to have. I don't
 know the specifics of AES, though, or what kind of crypto algorithm it
 is, so it's entirely possible that one can't effectively parallelize
 it except in some relatively unique circumstances.

 Do you have much experience of writing assembler?

 I don't, and I'm not an expert on this, but I've read the odd blog article on 
 this subject over the years.

Similar level of experience here. I can read it, even debug it from
time to time. A few regular bloggers on the subject are like candy.
And I used to have pagetable.org, Ars's Technopaedia and specsheets
for early x86 and motorola processors memorized. For the past couple
years, I've been focusing on reading blogs of language and compiler
authors, academics involved in proofing, testing and improving them,
etc.


 What I've read often has the programmer looking at the compiled gcc bytecode 
 and examining what it does. The compiler might not care how many registers it 
 uses, and thus a variable might find itself frequently swapped back into RAM; 
 the programmer does not have any control over the compiler, and IIRC some 
flags reserve a register for debugging (IIRC -fomit-frame-pointer disables 
 this). I think it's possible to use registers more efficiently by swapping 
 them (??) or by using bitwise comparisons and other tricks.

Sure; it's cheaper to null out a register by XORing it with itself
than setting it to 0.
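That particular idiom is easy to see for yourself (hedged: assumes gcc on an x86-ish target, and the exact instruction chosen can vary by target and compiler version):

```shell
# Compile a function that returns 0 and inspect the instruction gcc picks
# for clearing the return register; at -O2 on x86-64 it is typically
# "xorl %eax, %eax" rather than "movl $0, %eax".
if command -v gcc >/dev/null 2>&1; then
  echo 'int f(void){return 0;}' | gcc -O2 -S -x c - -o - | grep -E 'xor|mov' \
    || echo "pattern not found"
else
  echo "gcc not available"
fi
```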


 Assembler optimisation is only used on sections of code that are at the core 
 of a loop - that are called hundreds or thousands (even millions?) of times 
 during the program's execution. It's not for code, such as reading the 
 .config file or initialisation, which is only called once. Because the code 
 in the core of the loop is called so often, you don't have to achieve much of 
 an optimisation for the aggregate to be much more considerable.

Sure; optimize the hell out of the code where you spend most of your
time. I wasn't aware that gcc passed up on safe optimization
opportunities, though.


 The operations in question may only constitute a few lines of C, or a 
 handful of machine operations, so it boils down to an algorithm that a human 
 programmer is capable of getting a grip on and comprehending. Whilst 
 compilers are clearly more efficient for large programs, on this micro scale, 
 humans are more clever and creative than machines.

I disagree. With defined semantics for the source and target, a
computer's cleverness is limited only by the computational and memory
expense of its search algorithms. Humans get through this by making
habit various optimizations, but those habits become less useful as
additional paths and instructions are added. As system complexity
increases, humans operate on personally cached techniques derived from
simpler systems. I would expect very, very few people to be intimately
familiar with the the majority of optimization possibilities present
on an amdfam10 processor or a core2. Compilers aren't necessarily
familiar with them, either; they're just quicker at discovering them,
given knowledge of the individual instructions and the rules of
language semantics.


 Encryption / decryption is an example of code that lends itself to this kind 
 of optimisation. In particular AES was designed, I believe, to be amenable to 
 implementation in this way. The reason for that was that it was desirable to 
 have it run on embedded devices and on dedicated chips. So it boils down to a 
 simple bitswap operation (??) - the plaintext is modified by the encryption 
 key, input and output as a fast stream. Each byte goes in, each byte goes 
 out, the same function performed on each one.

I'd be willing to posit that you're right here, though if there isn't
a per-byte feedback mechanism, SIMD instructions would come into
serious play. But I expect there's a per-byte feedback mechanism, so
parallelization would likely come in the form of processing
simultaneous streams.


 Another operation that lends itself to assembler optimisation is video 
 decoding - the video is encoded only once, and then may be played back 
 hundreds or millions of times by different people. The same operations must 
 be repeated a number of times on each frame, then c 25 - 60 frames are 
 decoded per second, so at least 90,000 frames per hour. Again, the smallest 
 optimisation is worthwhile.

Absolutely. My position, though, is that compilers are quicker and
more capable of discovering optimization

Re: [gentoo-user] hard drive encryption

2012-03-13 Thread Michael Mol
On Tue, Mar 13, 2012 at 3:30 PM, Florian Philipp li...@binarywings.net wrote:
 On 13.03.2012 20:13, Michael Mol wrote:
 On Tue, Mar 13, 2012 at 2:58 PM, Florian Philipp li...@binarywings.net 
 wrote:
 On 13.03.2012 19:18, Michael Mol wrote:
 On Tue, Mar 13, 2012 at 2:06 PM, Florian Philipp li...@binarywings.net 
 wrote:
 On 13.03.2012 18:45, Frank Steinmetzger wrote:
 On Tue, Mar 13, 2012 at 05:11:47PM +0100, Florian Philipp wrote:

 Since I am planning to encrypt only home/ under LVM control, what kind
 of overhead should I expect?

 What do you mean with overhead? CPU utilization? In that case the
 overhead is minimal, especially when you run a 64-bit kernel with the
 optimized AES kernel module.

 Speaking of that...
 I always wondered what the exact difference was between AES and AES 
 i586. I
 can gather myself that it's about optimisation for a specific 
 architecture.
 But which one would be best for my i686 Core 2 Duo?

 From what I can see in the kernel sources, there is a generic AES
 implementation using nothing but portable C code and then there is
 aes-i586 assembler code with aes_glue C code.


 So I assume the i586
 version is better for you --- unless GCC suddenly got a lot better at
 optimizing code.

 Since when, exactly? GCC isn't the best compiler at optimization, but
 I fully expect current versions to produce better code for x86-64 than
 hand-tuned i586. Wider registers, more registers, crypto acceleration
 instructions and SIMD instructions are all very nice to have. I don't
 know the specifics of AES, though, or what kind of crypto algorithm it
 is, so it's entirely possible that one can't effectively parallelize
 it except in some relatively unique circumstances.


 One sec. We are talking about an Core2 Duo running in 32bit mode, right?
 That's what the i686 reference in the question meant --- or at least,
 that's what I assumed.

 I think you're right; I missed that part.


 If we talk about 32bit mode, none of what you describe is available.
 Those additional registers and instructions are not accessible with i686
 instructions. A Core 2 also has no AES instructions.

 Of course, GCC could make use of what it knows about the CPU, like
 number of parallel pipelines, pipeline depth, cache size, instructions
 added in i686 and so on. But even then I doubt it can outperform
 hand-tuned assembler, even if it is for a slightly older instruction set.

 I'm still not sure why. I'll posit that some badly-written C could
 place constraints on the compiler's optimizer, but GCC should have
 little problem handling well-written C, separating semantics from
 syntax and finding good transforms of the original code to get
 proofably-same results. Unless I'm grossly overestimating the
 capabilities of its AST processing and optimization engine.


 Well, it's not /that/ good. Otherwise the Firefox ebuild wouldn't need a
 profiling run to allow the compiler to predict loop and jump certainties
 and so on.

I was thinking more in the context of simple functions and
mathematical operations. Loop probabilities? Yeah, that's a tough one.
Nobody wants to stall a huge CPU pipeline. I remember when the
NetBurst architecture came out. Intel cranked up the amount of die
space dedicated to branch prediction...


 But, by all means, let's test it! It's not like we cannot.
 Unfortunately, I don't have a 32bit Gentoo machine at hand where I could
 test it right now.

Now we're talking. :)

Unfortunately, I don't have a 32-bit Gentoo environment available,
either. Actually, I've never run Gentoo in a 32-bit environment.
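For what it's worth, you can at least check which AES implementation a given kernel registered without benchmarking anything. A read-only sketch (the output is machine-specific, so the commands are shown commented; the driver with the higher priority wins):

```shell
# List the AES implementations the running kernel knows about, and which
# driver backs them (aes-generic is the portable C code, aes-i586 /
# aes-x86_64 are the assembler modules):
# grep -B1 -A2 'aes' /proc/crypto
# modinfo aes-i586    # or aes-x86_64, to confirm the optimized module exists
```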

-- 
:wq



Re: [gentoo-user] Re: LVM, /usr and really really bad thoughts.

2012-03-12 Thread Michael Mol
On Mon, Mar 12, 2012 at 2:23 PM, Jorge Martínez López jorg...@gmail.com wrote:
 Hi!

 2012/3/11 walt w41...@gmail.com:
 On 03/11/2012 05:16 AM, Jorge Martínez López wrote:
 Hi!

 Hi Jorge.

 I had some struggle with a separate /usr on top of LVM

 I'm just curious why you use a separate /usr, and why you are
 willing to struggle to keep it that way.  Several people have
 posted opinions here in recent months, but I don't recall that
 you are one of them.

 I believe that by the time I installed Gentoo it was recommended on
 the installation handbook. I did not give it much thought. I believed
 back then that thanks to LVM I could always grow and shrink my
 partitions as needed.

 If I had to do it again I would probably go the btrfs route (once they
 get fsck working).

 Regarding the whole /usr discussion, I trust the developers to know
 what they are doing better than I do and I did not find any serious
 flaw on their reasoning. It took me just a couple of hours to get the
 initrd working, so I did it and moved on. On the other hand I can
 understand some people disagree. I do not have a problem with that.

Don't forget you're using Gentoo; you're implicitly not very far
removed from the skill levels of the developers themselves.


-- 
:wq



Re: [gentoo-user] Re: LVM, /usr and really really bad thoughts.

2012-03-12 Thread Michael Mol
On Mon, Mar 12, 2012 at 2:39 PM, Bruce Hill, Jr.
da...@happypenguincomputers.com wrote:



 On March 12, 2012 at 2:30 PM Michael Mol mike...@gmail.com wrote:

 Don't forget you're using Gentoo; you're implicitly not very far
 removed from the skill levels of the developers themselves.


 --
 :wq


 Maybe you're not, but it only takes me a few minutes being around chithead
 and NeddySeagoon for me to realize I ain't gotta Gentoo clue!

Point is, most people I've seen in here know a lot more than most[1],
and are generally intelligent enough to overcome any limitation that
isn't fundamentally philosophical in origin (see mdev vs udev, ALSA vs
OSS4 vs PulseAudio, lvm vs mdraid vs physical raid vs btrfs vs zfs).

So don't sell yourself too short, and don't blindly trust the opinions
and decisions of others; they're not always as right as you assume them to
be, and managing to get Gentoo working suggests you have some right to
point out when the emperor's not wearing any clothes.


[1] I like to surround myself with people smarter or more
knowledgeable than I am about things, and I hit the mother lode in this
list...
-- 
:wq



Re: [gentoo-user] photo viewer other than gthumb?

2012-03-08 Thread Michael Mol
I typically use geeqie.

On Thu, Mar 8, 2012 at 9:20 PM, Grant emailgr...@gmail.com wrote:
 Can anyone recommend a photo browser/viewer other than gthumb which is
 in portage or an overlay?

 - Grant




-- 
:wq



Re: [gentoo-user] photo viewer other than gthumb?

2012-03-08 Thread Michael Mol
On Thu, Mar 8, 2012 at 10:14 PM, Daddy da...@happypenguincomputers.com wrote:



 On March 8, 2012 at 9:20 PM Grant emailgr...@gmail.com wrote:

 Can anyone recommend a photo browser/viewer other than gthumb which is
 in portage or an overlay?

 - Grant


 media-gfx/gqview

gqview became geeqie, FWIW. I don't recall the full story, but IIRC,
gqview stagnated, and geeqie is a fork.

-- 
:wq



Re: [gentoo-user] netcat - which?

2012-03-07 Thread Michael Mol
On Wed, Mar 7, 2012 at 2:03 AM, Pandu Poluan pa...@poluan.info wrote:
 eix netcat returned net-analyzer/gnu-netcat and net-analyzer/netcat

 What's the difference? Which one should I emerge?

Dunno. FWIW, I'm using net-analyzer/netcat6


-- 
:wq



Re: [gentoo-user] [OT] FOSS history books

2012-03-06 Thread Michael Mol
On Tue, Mar 6, 2012 at 1:18 PM, Claudio Roberto França Pereira
spide...@gmail.com wrote:
 I've ordered Rebel Code: Linux And The Open Source Revolution, from
 Glyn Moody, but I just realized it's pretty old, from January 2001, 11
 years ago.
 Of course I'll love reading it, but I'd like to complete this study
 with the 2000s developments of FOSS. Linux 2.6, HAL life-cycle, GCC
 evolution, Ubuntu creation, Mozilla history, Google rising like a
 rocket. Anyone know a good recommendation in this subject?

It's possible someone still needs to write one.

-- 
:wq



Re: [gentoo-user] LVM: Removing 3 disks and replacing with 1

2012-03-06 Thread Michael Mol
On Tue, Mar 6, 2012 at 1:34 PM, Stroller strol...@stellar.eclipse.co.uk wrote:

 On 6 March 2012, at 17:25, Neil Bothwick wrote:

 On Tue, 6 Mar 2012 16:32:56 +, Stroller wrote:

 … I initially want to replace the 3x1TBs with a single 3TB drive but
 i've never removed/replaced a drive in an LVM setup before. I think I
 understand how it is done, using pvmove …

 Or you could just format and mount the new drive and use `cp`.

 Thereby instantly removing the benefits of LVM and making it almost
 impossible to extend the space by adding another drive when needed.

 Uh, why not create a new volume group with the new disk?

 OP says he wants to *replace* the old disks.

Makes a certain amount of sense. But it sounds like he found a tool
(pvmove) to do what he needs to do, and trying to do more on top
complicates things beyond what they need to be.



 Admittedly, I don't like the RAID0 nature of LVM, so my question probably did 
 reflect that cynicism.

I typically put LVM on top of RAID.
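As a commented sketch of the pvmove-based replacement the thread is discussing (device and volume-group names are hypothetical; adapt before running anything):

```shell
# pvcreate /dev/sde1           # prepare the new 3TB disk as a physical volume
# vgextend vg0 /dev/sde1       # add it to the existing volume group
# pvmove /dev/sdb1             # migrate all extents off one old 1TB PV
# vgreduce vg0 /dev/sdb1       # drop the now-empty PV from the VG
# (repeat the pvmove/vgreduce pair for each remaining old disk)
```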


-- 
:wq



Re: [gentoo-user] Photo management programs

2012-03-05 Thread Michael Mol
On Sun, Mar 4, 2012 at 3:41 PM, Dale rdalek1...@gmail.com wrote:
 Michael Mol wrote:
 So I take a lot of pictures. A *lot* of pictures. Sometimes around
 500/month, sometimes twice that if I manage to get out more. I've got
 a large number of 'DCIM' directories from different cameras, different
 camera models, etc, going back ten years. Sometimes in JPG, sometimes
 RAW, sometimes both.

 And I've never really managed them well.

 Does anyone have any photo management tool they like? I've got bits of
 Qt and Gtk installed already, and while I'd prefer to avoid pulling in
 a full desktop environment, I might--if the tool is good enough. It
 would have to:

 * Handle RAW (via libraw or dcraw is fine), JPEG, PNG[1] and TIFF[1]
 content and metadata
 * Index by metadata, including things like the recording camera's
 serial number[2]
 * Not be destructive, or ambiguous about being destructive, on image
 import. I tried using Amarok to organize my music, which is in similar
 disarray, and I was never sure if it was being destructive about the
 source files/folders. So I made copies. Which ultimately added to the
 disarray.


 [1] My postprocessing occasionally winds up in lossless formats like these.
 [2] My fiancee and I have the same model camera, and occasionally need
 to share memory cards, so I'd like to be able to use serial number to
 distinguish whose is whose.



 As someone who also takes a LOT of pictures at times, I don't use
 software, I just use directories.  Mine starts out like this:  Camera
 directory - Year - subject matter - image.  That works for me.  I used to
 not have the year but that ends up with a LOT of pictures in a
 directory.  Example of mine as it goes to an actual image:

 Camera-pics/2012/New Years/2012-01-05-8.JPG

 I have been using gtkam to download my pics for years.  Thing is, it has
 a bug up its butt and wants to crash at random times, usually when
 changing the directories.  Anyway, it always crashes before I am done
 and lets just say it gets on my freaking nerves.  So, I tried digikam.
 Well, my camera has multiple directories and for some reason it doesn't
 show them all and then duplicates other images to boot.  I may have 2 or
 3 copies of the same picture.  I have yet to figure out why that is and
 google, now startpage, has not helped me either.  Maybe I am searching
 for the wrong thing?

 If you want software to help manage your images, I'd try digikam.  If it
 works for you and your camera, it should do fine.  If you want to go my
 route, try gtkam and hope like heck it doesn't crash for you too.  Right
 now, both of those get on my nerves for different reasons.

 Hope that helps and is clearer than mud.  Maybe someone will come along
 with a better plan for us both too.  lol

Based on this and other posts in the thread, I'll probably give
digikam a try. I did want to clarify one point, though: I don't
connect the camera to the computer; I put the SD card into a card
reader, and copy from there.
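A minimal, non-destructive sketch of a year-based sort in the spirit of Dale's layout, using only GNU coreutils. The directories and the sample file are fabricated for the demo; point src at a real card dump instead:

```shell
src=./card-dump
dst=./Camera-pics
mkdir -p "$src" "$dst"
touch -d '2012-01-05' "$src/2012-01-05-8.JPG"   # fabricated sample image

for f in "$src"/*.JPG; do
  year=$(date -r "$f" +%Y)     # year taken from the file's mtime (GNU date)
  mkdir -p "$dst/$year"
  cp -n "$f" "$dst/$year/"     # -n: never overwrite, so it stays non-destructive
done
ls "$dst/2012"
```

Sorting by mtime is a rough proxy; for metadata-aware sorting (EXIF dates, serial numbers) a tool like digikam or exiftool is still the better fit.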

-- 
:wq



[gentoo-user] Color profiles and colorspace awareness

2012-03-05 Thread Michael Mol
What's the current state of color space awareness and color profiles
on Linux? I'm at the point where I'd be willing to spring for a
calibration kit to calibrate my monitors, which did not themselves
come with color profiles...except all the kits I've seen seem to
assume Windows.

I'd like to move towards having a full color-aware workflow for both
my LDR and HDR (DEF-JPG, DEF-hugin-stitch-JPG) work, and I'd like
to do it on Linux. I don't want to ask these kinds of questions in
more photography-oriented environments, because I'm tired of being
peppered with "Just use a Mac", "Just use Windows" and "Just use
Photoshop" responses. I know Gimp and ufraw support colorspace
profiles, I've seen the USE flags in portage, it's just the inputs and
outputs of my workflow which I don't yet see how to cover.

Slightly offtopic: I'd love to be able to set up a color profile for
my Pentax K-x's raw output, too. Anyone aware of a

-- 
:wq



Re: [gentoo-user] Color profiles and colorspace awareness

2012-03-05 Thread Michael Mol
On Mon, Mar 5, 2012 at 11:58 AM, Paul Hartman
paul.hartman+gen...@gmail.com wrote:
 On Mon, Mar 5, 2012 at 10:33 AM, Michael Mol mike...@gmail.com wrote:
 What's the current state of color space awareness and color profiles
 on Linux? I'm at the point where I'd be willing to spring for a
 calibration kit to calibrate my monitors, which did not themselves
 come with color profiles...except all the kits I've seen seem to
 assume Windows.

 media-gfx/argyllcms supports many kinds of hardware calibration
 devices and lets you do color management in gentoo. Their website has
 a list of the supported hardware:
 http://www.argyllcms.com/doc/instruments.html

 I have not used it myself, but hopefully that's enough to get you
 Googling for some case studies :)

That's exactly the kind of lead I needed, thanks!
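For anyone following the same lead, a heavily hedged sketch of an ArgyllCMS monitor-profiling session. The tool names (dispcal, targen, dispread, colprof, dispwin) are ArgyllCMS's own; the flags and the "mydisplay" basename are illustrative only, so check each tool's usage output before running:

```shell
# dispcal -v -qm mydisplay                 # measure the display, write a .cal
# targen -v -d3 mydisplay                  # generate profiling test patches
# dispread -v -k mydisplay.cal mydisplay   # read patches through the calibration
# colprof -v -A "MyVendor" mydisplay       # build an ICC profile from the readings
# dispwin -I mydisplay.icc                 # install/load it for the display
```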


-- 
:wq



Re: [gentoo-user] Photo management programs

2012-03-05 Thread Michael Mol
On Mon, Mar 5, 2012 at 12:04 PM, Dale rdalek1...@gmail.com wrote:
 Michael Mol wrote:

 Based on this and other posts in the thread, I'll probably give
 digikam a try. I did want to clarify one point, though: I don't
 connect the camera to the computer; I put the SD card into a card
 reader, and copy from there.



 It is a nice program and I'm pretty sure it allows you to download from
 your card too.  I'm not sure gtkam will allow downloads from the card so
 you are likely headed down the right road.

Well, I use scp to move the files from machines with with card readers
to the machines I do processing. If digikam has any kind of 'import'
support, that'd do it.


 Honestly, if digikam worked right with my camera, I'd use it in a heart
 beat.  I like it but I can't get my pics to show up right.  I can't
 figure out why tho.  Maybe I should try getting from the stick like you
 do?  Thing is, I leave my camera on the tri-pod about 90% of the time.
 The card is on the bottom of mine beside the battery.

Check out the Eye-Fi?

http://www.eye.fi/

When I first heard about it, someone had just gotten a receiving
daemon written in Python to work with it.

-- 
:wq



Re: [gentoo-user] Re: Awesome WM, io.popen() attempt to index io nil value

2012-03-04 Thread Michael Mol
I use AwesomeWM, but I haven't messed with the Lua side of things. You
might try in #awesome on Freenode.

On Sun, Mar 4, 2012 at 2:13 PM, trevor donahue donahue.tre...@gmail.com wrote:
 anyone?

 On Thu, Mar 1, 2012 at 3:17 PM, trevor donahue donahue.tre...@gmail.com
 wrote:

 http://www.lua.org/manual/5.1/manual.html#5.7
 the doc
 also found on other resources scripts using IO not io, that still aint
 working...


 On Thu, Mar 1, 2012 at 3:14 PM, trevor donahue donahue.tre...@gmail.com
 wrote:

 Hi folks,
 is anyone of you using awesome wm?
 I've been struggling with a little bit of a problem lately, wanted to
 create a widget that retrieves gmail data using curl. The problem
 encountered is the function io.popen() that returns nil [attempt to index io
 nil value] (as having an error in lua) even though not doing anything
 special, tested also with ls -l and other trivial bash commands...

 Can somebody help me resolve the problem?




 --
 Thanks,
 Donahue Trevor




 --
 Thanks,
 Donahue Trevor




-- 
:wq



Re: [gentoo-user] Re: Awesome WM, io.popen() attempt to index io nil value

2012-03-04 Thread Michael Mol
Er. #awesome on OFTC apparently has more users.

On Sun, Mar 4, 2012 at 2:28 PM, Michael Mol mike...@gmail.com wrote:
 I use AwesomeWM, but I haven't messed with the Lua side of things. You
 might try in #awesome on Freenode.

 On Sun, Mar 4, 2012 at 2:13 PM, trevor donahue donahue.tre...@gmail.com 
 wrote:
 anyone?

 On Thu, Mar 1, 2012 at 3:17 PM, trevor donahue donahue.tre...@gmail.com
 wrote:

 http://www.lua.org/manual/5.1/manual.html#5.7
 the doc
 also found on other resources scripts using IO not io, that still aint
 working...


 On Thu, Mar 1, 2012 at 3:14 PM, trevor donahue donahue.tre...@gmail.com
 wrote:

 Hi folks,
 is anyone of you using awesome wm?
 I've been struggling with a little bit of a problem lately, wanted to
 create a widget that retrieves gmail data using curl. The problem
 encountered is the function io.popen() that returns nil [attempt to index 
 io
 nil value] (as having an error in lua) even though not doing anything
 special, tested also with ls -l and other trivial bash commands...

 Can somebody help me resolve the problem?




 --
 Thanks,
 Donahue Trevor




 --
 Thanks,
 Donahue Trevor




 --
 :wq



-- 
:wq



[gentoo-user] Photo management programs

2012-03-04 Thread Michael Mol
So I take a lot of pictures. A *lot* of pictures. Sometimes around
500/month, sometimes twice that if I manage to get out more. I've got
a large number of 'DCIM' directories from different cameras, different
camera models, etc, going back ten years. Sometimes in JPG, sometimes
RAW, sometimes both.

And I've never really managed them well.

Does anyone have any photo management tool they like? I've got bits of
Qt and Gtk installed already, and while I'd prefer to avoid pulling in
a full desktop environment, I might--if the tool is good enough. It
would have to:

* Handle RAW (via libraw or dcraw is fine), JPEG, PNG[1] and TIFF[1]
content and metadata
* Index by metadata, including things like the recording camera's
serial number[2]
* Not be destructive, or ambiguous about being destructive, on image
import. I tried using Amarok to organize my music, which is in similar
disarray, and I was never sure if it was being destructive about the
source files/folders. So I made copies. Which ultimately added to the
disarray.


[1] My postprocessing occasionally winds up in lossless formats like these.
[2] My fiancee and I have the same model camera, and occasionally need
to share memory cards, so I'd like to be able to use serial number to
distinguish whose is whose.

-- 
:wq



[gentoo-user] rng-tools

2012-03-03 Thread Michael Mol
So I've been making extensive use of rngd on one of my Debian servers,
and I wanted to make use of it on a couple of my Gentoo boxes. Only to
find out that two parameters I need, -T and -R, aren't available, even
when I unmask version '3' in portage. Even the manpage contains
'FIXME' where the Debian manpage contains much more information.

The version of the rng-tools Debian package I'm using is
'2-unofficial-mt.14-1~60squeeze1'. Are the -T and -R parameters unique
to Debian, or is the Gentoo package simply out of date?

-- 
:wq



Re: [gentoo-user] bind-9.8.1_p1 recompilation failed...

2012-03-01 Thread Michael Mol
On Thu, Mar 1, 2012 at 12:55 PM, Jarry mr.ja...@gmail.com wrote:
 Hi,
 I just updated portage tree and as a result of that bind wanted
 to be recompilled (the only difference is -static-libs%):

 [ebuild   R    ] net-dns/bind-9.8.1_p1  USE="berkdb dlz idn ipv6 ssl urandom
 -caps -doc -geoip -gost -gssapi -ldap -mysql -odbc -pkcs11 -postgres -rpz
 -sdb-ldap (-selinux) -static-libs% -threads -xml" 0 kB

 But compilation failed with these messages:

 ==
 libtool: compile:  x86_64-pc-linux-gnu-gcc
 -I/var/tmp/portage/net-dns/bind-9.8.1_p1/work/bind-9.8.1-P1
 -I/var/tmp/portage/net-dns/bind-9.8.1_p1/work/bind-9.8.1-P1/lib/dns/include
 -I../../../../lib/dns/include
 -I/var/tmp/portage/net-dns/bind-9.8.1_p1/work/bind-9.8.1-P1/lib/isc/include
 -I../../../../lib/isc -I../../../../lib/isc/include
 -I../../../../lib/isc/unix/include -I../../../../lib/isc/nothreads/include
 -I../../../../lib/isc/x86_32/include -I/usr/include -D_GNU_SOURCE
 -march=athlon64 -O2 -pipe -I/usr/include/db4.8 -fPIC -W -Wall
 -Wmissing-prototypes -Wcast-qual -Wwrite-strings -Wformat -Wpointer-arith
 -fno-strict-aliasing -c driver.c  -fPIC -DPIC -o .libs/driver.o
 x86_64-pc-linux-gnu-gcc -shared -o driver.so driver.o
 x86_64-pc-linux-gnu-gcc: driver.o: No such file or directory
 x86_64-pc-linux-gnu-gcc: no input files
 make[4]: *** [driver.so] Error 1
 make[4]: Leaving directory
 `/var/tmp/portage/net-dns/bind-9.8.1_p1/work/bind-9.8.1-P1/bin/tests/system/dlzexternal'
 make[3]: *** [subdirs] Error 1
 make[3]: Leaving directory
 `/var/tmp/portage/net-dns/bind-9.8.1_p1/work/bind-9.8.1-P1/bin/tests/system'
 make[2]: *** [subdirs] Error 1
 make[2]: Leaving directory
 `/var/tmp/portage/net-dns/bind-9.8.1_p1/work/bind-9.8.1-P1/bin/tests'
 make[1]: *** [subdirs] Error 1
 make[1]: Leaving directory
 `/var/tmp/portage/net-dns/bind-9.8.1_p1/work/bind-9.8.1-P1/bin'
 make: *** [subdirs] Error 1

  * ERROR: net-dns/bind-9.8.1_p1 failed (compile phase):
  *   emake failed
  *
  * If you need support, post the output of 'emerge --info
 =net-dns/bind-9.8.1_p1',
  * the complete build log and the output of 'emerge -pqv
 =net-dns/bind-9.8.1_p1'.
  * The complete build log is located at
 '/var/tmp/portage/net-dns/bind-9.8.1_p1/temp/build.log'.
  * The ebuild environment file is located at
 '/var/tmp/portage/net-dns/bind-9.8.1_p1/temp/environment'.
  * S: '/var/tmp/portage/net-dns/bind-9.8.1_p1/work/bind-9.8.1-P1'

 Failed to emerge net-dns/bind-9.8.1_p1, Log file:
  '/var/tmp/portage/net-dns/bind-9.8.1_p1/temp/build.log'

  * Messages for package net-dns/bind-9.8.1_p1:
  * ERROR: net-dns/bind-9.8.1_p1 failed (compile phase):
  *   emake failed
 ==

 What could be the problem? I remember just yesterday I updated
 bind from 9.7.4_p1 to 9.8.1_p1, but today recompilation simply
 failed...

First guess: Parallel build error. Try:

emerge --resume

and see if it gets around it. If not, try

MAKEOPTS=-j1 emerge --resume

and see if that fixes it.

-- 
:wq



Re: [gentoo-user] [OFF] string1!string2!string3 notation

2012-02-27 Thread Michael Mol
On Mon, Feb 27, 2012 at 10:28 AM, Todd Goodman t...@bonedaddy.net wrote:
 * Claudio Roberto Fran?a Pereira spide...@gmail.com [120227 08:35]:
 I'm reading Writing Solid Code, by Steve Maguire, and at the end of
 the book there is an about the author section that mentions two
 contact addresses: one is an email, the other is
 microsoft!storm!stevem. The book is from 1993, so that should be an
 old address, for an old protocol. So what? That's not enough for my
 curiosity. Anyone does know where this came from?

 --
 Claudio Roberto França Pereira


 As others have said it's a bang path for UUCP routing.

 It was used for mail routing even when not strictly using UUCP as well.

 This was before such thing as DNS and you got to pass around host tables
 (/etc/hosts) which contained all known hosts and their IP addresses.

Predates me somewhat, but I believe UUCP operated over DUN/direct
serial without the IP layer, as well.


-- 
:wq



[gentoo-user] nvidia module, __raw_spin_lock_init

2012-02-24 Thread Michael Mol
Is anyone else able to get nvidia-drivers 290.10 to load into a kernel
from gentoo-sources 3.2.1-r2? This box has been headless for so long,
I really don't have a good baseline comparison.

When I try to load the module, I get nvidia: Unknown symbol
__raw_spin_lock_init (err 0).

-- 
:wq



[gentoo-user] Re: nvidia module, __raw_spin_lock_init

2012-02-24 Thread Michael Mol
On Fri, Feb 24, 2012 at 12:49 PM, Michael Mol mike...@gmail.com wrote:
 Is anyone else able to get nvidia-drivers 290.10 to load into a kernel
 from gentoo-sources 3.2.1-r2? This box has been headless for so long,
 I really don't have a good baseline comparison.

 When I try to load the module, I get nvidia: Unknown symbol
 __raw_spin_lock_init (err 0).

Figured out that one; I had to enable DEBUG_SPINLOCK.

Now I'm trying to figure out why I get:

[   30.650581] NVRM: Can't find an IRQ for your NVIDIA card!
[   30.650587] NVRM: Please check your BIOS settings.
[   30.650591] NVRM: [Plug & Play OS] should be set to NO
[   30.650595] NVRM: [Assign IRQ to VGA] should be set to YES
[   30.650610] nvidia: probe of :01:00.0 failed with error -1
[   30.650634] NVRM: The NVIDIA probe routine failed for 1 device(s).
[   30.650636] NVRM: None of the NVIDIA graphics adapters were initialized!

lspci -kvv shows:
01:00.0 VGA compatible controller: nVidia Corporation GT200 [GeForce
210] (rev a2) (prog-if 00 [VGA controller])
Subsystem: Micro-Star International Co., Ltd. Device 2011
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
ParErr- Stepping- SERR+ FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast TAbort-
TAbort- MAbort- SERR- PERR- INTx-
Latency: 0, Cache Line Size: 64 bytes
Interrupt: pin A routed to IRQ 18
Region 0: Memory at fa00 (32-bit, non-prefetchable) [size=16M]
Region 1: Memory at d000 (64-bit, prefetchable) [size=256M]
Region 3: Memory at ce00 (64-bit, prefetchable) [size=32M]
Region 5: I/O ports at cc00 [size=128]
Expansion ROM at fb20 [disabled] [size=512K]
Capabilities: access denied
Kernel modules: nvidia

Any ideas?

-- 
:wq



Re: [gentoo-user] Re: Safe way to test a new kernel?

2012-02-24 Thread Michael Mol
On Fri, Feb 24, 2012 at 9:08 PM, Grant emailgr...@gmail.com wrote:
 I need to test a kernel config change on a remote system.  Is there a
 safe way to do this?  The fallback thing in grub has never worked for
 me.  When does that ever work?


 You can press ESC in the Grub screen and it will take you to text-only mode.
  There, you select an entry, press e and edit it.  Press ENTER when you're
 finished, and then press b to boot your modified entry.

 That way, you can boot whatever kernel you want if the current one doesn't
 work.

 I can't do that remotely though.  I'm probably asking for something
 that doesn't exist.

What's the nature of the remote box?

For example, I have a xen vps for which I can access the console via
ssh to the xen host machine. I can get at the grub menu that way. I
think grub supports serial consoles, but I don't know...
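For the record, the fallback mechanism Grant mentions is configured in GRUB legacy roughly like this (a sketch; entry numbers, paths, and kernel names are hypothetical). The panic=30 argument makes the box reboot 30 seconds after a kernel panic instead of hanging, so the fallback entry can actually kick in:

```
# /boot/grub/menu.lst sketch for one-shot remote kernel tests
default saved
fallback 1

title Test kernel (entry 0)
    root (hd0,0)
    kernel /boot/vmlinuz-test root=/dev/sda3 panic=30
    savedefault fallback

title Known-good kernel (entry 1)
    root (hd0,0)
    kernel /boot/vmlinuz-good root=/dev/sda3
    savedefault
```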


-- 
:wq



Re: [gentoo-user] gcc fails and then succeeds - definitely a problem?

2012-02-23 Thread Michael Mol
On Thu, Feb 23, 2012 at 2:17 PM, Grant emailgr...@gmail.com wrote:
 The gcc update just failed to compile on one of my systems with a
 segfault, but then succeeded after trying again even though I didn't
 change anything.  Does that indicate a hardware problem for sure?
 Should I run memtester?  Any other tests to run?  Nothing in dmesg.

Not definitively anything; it could have been a race condition.

Memtest if you like. prime95 is designed for CPU and memory burning,
too, and wouldn't require you to shutdown your system.

-- 
:wq



Re: [gentoo-user] gcc fails and then succeeds - definitely a problem?

2012-02-23 Thread Michael Mol
On Thu, Feb 23, 2012 at 2:28 PM, Mark Knecht markkne...@gmail.com wrote:
 On Thu, Feb 23, 2012 at 11:17 AM, Grant emailgr...@gmail.com wrote:
 The gcc update just failed to compile on one of my systems with a
 segfault, but then succeeded after trying again even though I didn't
 change anything.  Does that indicate a hardware problem for sure?
 Should I run memtester?  Any other tests to run?  Nothing in dmesg.

 - Grant


 Might be... might be nothing. Maybe a stray neutrino hit your
 processor at just the wrong instant. ;-)

 More likely, in my mind, is some little corner condition in the software
 running on your system. I've had the same thing happen many times
 actually, and actually a few more times since I started playing with
 your /etc/make.conf -j/-l values which push the system a little
 harder.

Whenever I get build failures with the load-adaptive MAKEOPTS and
EMERGE_DEFAULT_OPTS, I check the build log to see if it's relatively
obvious that something was depended upon before it was built. If so, I
file a bug.

Happens every month or so, for me.

-- 
:wq



Re: [gentoo-user] Re: Firefox-10.0.1 fails to compile on x86

2012-02-23 Thread Michael Mol
On Thu, Feb 23, 2012 at 2:36 PM, Nikos Chantziaras rea...@arcor.de wrote:
 On 23/02/12 12:44, Mick wrote:

 On Thursday 23 Feb 2012 10:22:40 Willie WY Wong wrote:

 On Tue, Feb 21, 2012 at 07:22:27PM -0500, Penguin Lover Philip Webb

 squawked:

 I compiled FF 10.0.1 on amd64 without any problems :
 it needed  3,61 GB  disk space for the link stage
   most/all of my  2 GB  memory.


 Argh. 3.6G diskspace and 2G memory? I guess it is finally getting to
 the point that my laptop cannot build firefox. Time to switch to the
 -bin I guess.


 I've only got something like 625M RAM and around 4G disk space (for
 var/portage).  I used 750M from that 4G for adding swap.  Eventually FF
 compiled fine.

 The irony is that older boxen which would benefit most from building from
 source are constrained in resources to achieve this and have to resort to
 installing bin packages.


 I doubt that the bin package will be slower than the one compiled from
 source.  I predict the reverse, in fact.  The bin package will perform
 better.

That seems a strange prediction. What drives that hunch?

-- 
:wq



Re: [gentoo-user] Re: Firefox-10.0.1 fails to compile on x86

2012-02-23 Thread Michael Mol
On Thu, Feb 23, 2012 at 2:55 PM, Nikos Chantziaras rea...@arcor.de wrote:
 On 23/02/12 21:42, Michael Mol wrote:

 On Thu, Feb 23, 2012 at 2:36 PM, Nikos Chantziarasrea...@arcor.de
  wrote:

 On 23/02/12 12:44, Mick wrote:

 The irony is that older boxen which would benefit most from building
 from
 source are constrained in resources to achieve this and have to resort
 to
 installing bin packages.


 I doubt that the bin package will be slower than the one compiled from
 source.  I predict the reverse, in fact.  The bin package will perform
 better.


 That seems a strange prediction. What drives that hunch?


 The PGO optimized build that Mozilla is shipping.  You can also build with
 PGO from source, but that means building FF *twice* in a row (by enabling
 the pgo USE flag).  I doubt that with the old laptop anyone is building FF
 twice with PGO, and that means that the -bin package should be faster.

 Furthermore, FF is built using its own CFLAGS.  They are the same in the
 source build as well as in the -bin package.  The only difference is
 probably the -march option.  And that doesn't make much difference to begin
 with (after -march=i686, gains are very minimal).

I knew and forgot about PGO, but I didn't realize there was a USE flag
for it. Neat. I'll be enabling that.

I disagree with the idea that keeping things down around -march=i686
provides only minimal gains. SSE and SSE2 instructions carry a big
benefit for numerical operations, especially those which can be
parallelized, but not enough to justify batching into a GPU. AVX will
be adding operations which allow more useful and flexible use of
registers. Simply bumping up to x86-64 from simple x86 doubles your
GPRs, which gives the compiler all kinds of room to work with.

If the combination of those things doesn't significantly benefit a
program written in C or C++, then I suspect there's something
dreadfully wrong with the architecture of that codebase.
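For concreteness, the kind of make.conf fragment that exposes those instruction sets to the compiler. The values are illustrative, not a recommendation; -march=native simply lets GCC resolve the exact ISA of the host CPU:

```
CFLAGS="-march=native -O2 -pipe"
CXXFLAGS="${CFLAGS}"
```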


-- 
:wq



Re: [gentoo-user] ebuild for a fee?

2012-02-23 Thread Michael Mol
On Thu, Feb 23, 2012 at 3:07 PM, Sebastian Pipping sp...@gentoo.org wrote:
 On 02/17/2012 04:09 AM, Grant wrote:
 I'd like to pay to have an ebuild built.  Can anyone recommend a way
 to get in touch with a good person for the job?

 ebuild doesn't equal ebuild: packaging java is different to packaging
 python software etc.  find an existing ebuild similar to what you need
 and contact its authors.  that's what i would do.

FWIW, Diego blogged about it.

http://blog.flameeyes.eu/2012/02/19/working-outside-the-bubble

-- 
:wq



Re: [gentoo-user] gcc fails and then succeeds - definitely a problem?

2012-02-23 Thread Michael Mol
On Thu, Feb 23, 2012 at 4:00 PM, Grant emailgr...@gmail.com wrote:
  Parallel builds are not deterministic so if the Makefile allows a race
  condition to develop it's pot luck whether you'll be hit with it or
  not

 I got sick of stuff like that so I run MAKEOPTS=-j1 on all of my
 systems.

 If it were a frequent occurrence, there may be some benefit in that. But
 using only one of the CPUs 8 cores is such a waste when this sort of
 thing happens only every few weeks. Usually trying again works, rarely
 does using -j1 make a difference and when it does a bug report ensures
 that it won't be an issue in future.

 OK you've inspired me to give it another try.  So if I find a package
 that doesn't build with -jn where n  1 but does build with -j1 I
 should file a bug?

Pretty much. It can get more specific than that, but that much is
already a help.

Here's the relevant portions of my MAKEOPTS and EMERGE_DEFAULT_OPTS
which should speed things up for you about as much as possible.

MAKEOPTS="--jobs --load $n" # Where $n is num_CPUs * 1.25
EMERGE_DEFAULT_OPTS="--jobs --load-average=$m" # Where $m is num_CPUs * 1.5

With the --jobs parameters, I haven't needed to set $n or $m to
num_CPUs*2 to try to keep the load average up.

Here's my MAKEOPTS and EMERGE_DEFAULT_OPTS verbatim, for an eight-core machine:

MAKEOPTS="--jobs --load 10"
EMERGE_DEFAULT_OPTS="--jobs --load-average=12 --verbose --tree --with-bdeps=y --keep-going"

If you want to keep things simple, just go with num_CPUs=n for both $m and $n.
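The arithmetic above can be sketched as a small shell snippet; nproc is from coreutils, and the 1.25/1.5 multipliers and variable names are just the ones used above (this is a convenience sketch, not official portage tooling):

```shell
# Derive the suggested load limits from the CPU count. Shell integer
# arithmetic truncates, so 1.25 and 1.5 are written as fractions.
ncpus=$(nproc 2>/dev/null || echo 4)  # fall back if nproc is unavailable
make_load=$(( ncpus * 5 / 4 ))        # num_CPUs * 1.25
emerge_load=$(( ncpus * 3 / 2 ))      # num_CPUs * 1.5
echo "MAKEOPTS=\"--jobs --load ${make_load}\""
echo "EMERGE_DEFAULT_OPTS=\"--jobs --load-average=${emerge_load}\""
```

On the eight-core machine above, this reproduces --load 10 and --load-average=12.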

-- 
:wq



Re: [gentoo-user] [Solved] Linux Kernel 3.2.0 USB Mouse

2012-02-22 Thread Michael Mol
2012/2/22 Space Cake spaceca...@gmail.com:
 Today I've tried to upgrade from 3.1.6 to 3.2.1. I did not change
 anything else only the options mentioned below

 Device Drivers
 - HID Devices (HID_SUPPORT)
 - Special HID drivers
 - Logitech devices (HID_LOGITECH)
 - Logitech Unifying receivers full support

 After that my mouse stopped working in X (yes, evdev emerged after the
 kernel upgrade).

 I had the following errors in my log

 Feb 22 17:17:55 brutal kernel: logitech-djreceiver 0003:046D:C52B.0003:
 claimed by neither input, hiddev nor hidraw
 Feb 22 17:17:55 brutal kernel: logitech-djreceiver 0003:046D:C52B.0003:
 logi_dj_probe:hid_hw_start returned error
 Feb 22 17:26:20 brutal kernel: logitech-djreceiver 0003:046D:C52B.0007:
 claimed by neither input, hiddev nor hidraw
 Feb 22 17:26:20 brutal kernel: logitech-djreceiver 0003:046D:C52B.0007:
 logi_dj_probe:hid_hw_start returned error

 Do you have any idea?

Try disabling HID_LOGITECH, and rely on their base support for the HID
input device spec.
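One way to confirm what the running kernel was actually built with is to grep its config. This sketch assumes CONFIG_IKCONFIG_PROC is enabled (so /proc/config.gz exists) or a configured tree sits at /usr/src/linux, and uses the 3.2-era option names; verify both locally:

```shell
# Best-effort check of the Logitech HID options in the kernel config.
if [ -r /proc/config.gz ]; then
    zcat /proc/config.gz | grep -E 'CONFIG_HID_LOGITECH(_DJ)?=' || true
elif [ -r /usr/src/linux/.config ]; then
    grep -E 'CONFIG_HID_LOGITECH(_DJ)?=' /usr/src/linux/.config || true
else
    echo "no kernel config found" >&2
fi
```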

-- 
:wq



Re: [gentoo-user] Gentoo Raid install via ubuntu

2012-02-21 Thread Michael Mol
On Tue, Feb 21, 2012 at 2:39 PM, James wirel...@tampabay.rr.com wrote:
 Hello,

 Well someone has suggested that to install Gentoo on a Raid system,
 just use the latest version of Ubuntu to set up the raid. Then
 you can do a traditional install on top of the Ubuntu and
 you have a RAID install of Gentoo.

 Is this practical?

 Has anyone installed gentoo on ubuntu raid install?
 If so, your experiences?

I haven't tried anything that way, but it sounds like using Ubuntu as
a fancy bootstrap to replace the Gentoo live boot environment, and
seems unnecessary. Have you tried the Gentoo live DVD?

-- 
:wq



Re: [gentoo-user] Gentoo Raid install via ubuntu

2012-02-21 Thread Michael Mol
On Tue, Feb 21, 2012 at 3:07 PM, Neil Bothwick n...@digimed.co.uk wrote:
 On Tue, 21 Feb 2012 14:46:02 -0500, Michael Mol wrote:

  Has anyone installed gentoo on ubuntu raid install?
  If so, your experiences?

 I haven't tried anything that way, but is sounds like using Ubuntu as
 a fancy bootstrap to replace the Gentoo live boot environment, and
 seems unnecessary. Have you tried the Gentoo live DVD?

 I did this several years ago, because I wanted a functional distro to
 work with while compiling everything, a long task with the hardware of the
 day. It's no different to using a live CD for the job, just make sure the
 tools you need are installed in the host OS before you start.

I prefer to do it this way, as I can load up the Gentoo Handbook in a
browser and avoid the risk of typos by copy/pasting commands. And if
I hit an error[1], I can copy/paste it if I need to dig up someone
else who's had a similar problem.

[1] And, really, every new box is unique, and I always find
*something* to file a bug report against...

-- 
:wq



Re: [gentoo-user] Gentoo Raid install via ubuntu

2012-02-21 Thread Michael Mol
On Tue, Feb 21, 2012 at 3:35 PM, Mark Knecht markkne...@gmail.com wrote:
 On Tue, Feb 21, 2012 at 12:23 PM, Michael Mol mike...@gmail.com wrote:
 On Tue, Feb 21, 2012 at 3:07 PM, Neil Bothwick n...@digimed.co.uk wrote:
 On Tue, 21 Feb 2012 14:46:02 -0500, Michael Mol wrote:

  Has anyone installed gentoo on ubuntu raid install?
  If so, your experiences?

 I haven't tried anything that way, but is sounds like using Ubuntu as
 a fancy bootstrap to replace the Gentoo live boot environment, and
 seems unnecessary. Have you tried the Gentoo live DVD?

 I did this several years ago, because I wanted a functional distro to
 work with while compiling everything, a long task with the hardware of the
 day. It's no different to using a live CD for the job, just make sure the
 tools you need are installing in the host OS before you start.

 I prefer to do it this way, as I can load up the Gentoo Handbook in a
 browser and avoid the risk of some typos by copy/pasting some
 commands. And if I hit an error[1], I can copy/paste if I need to dig
 up someone else who's had a similar problem.

 [1] And, really, every new box is unique, and I always find
 *something* to file a bug report against...

 Is it the only running machine and you can only do the install sitting
 at the machine?

Occasionally that's the most convenient approach, sure. Especially
when physical space is tight, or when the new box has much larger
display[s] available to it.


 I do most installs by booting the Gentoo install CD, enabling ssh and
 then shelling in from another machine where I run the handbook and
 copy/paste the commands in my shell terminal. No need for Ubuntu to do
 that unless the machine is somehow in isolation and doesn't have
 networking.

 That said, using Ubuntu might be a very good way to do it especially
 if you are going to build a RAID which isn't automatically recognized
 at boot by the kernel. I.e. - needs an initrd. Think metadata > 0.9
 and things like RAID5 or 6.

 I once used Ubuntu to get a PowerPC machine booting Linux, then
 studied how Ubuntu did it and did my Gentoo install from scratch on a
 different partition until it worked at which time I removed Ubuntu.

Note I never said I used Ubuntu for this process. I was using the
Gentoo live DVD. I don't see the need to use an Ubuntu disc over a
Gentoo disc, in this case.


-- 
:wq



Re: [gentoo-user] Re: Gentoo Raid install via ubuntu

2012-02-21 Thread Michael Mol
On Tue, Feb 21, 2012 at 3:41 PM, James wirel...@tampabay.rr.com wrote:
 Michael Mol mikemol at gmail.com writes:


   Has anyone installed gentoo on ubuntu raid install?
   If so, your experiences?

  I haven't tried anything that way, but is sounds like using Ubuntu as
  a fancy bootstrap to replace the Gentoo live boot environment, and
  seems unnecessary. Have you tried the Gentoo live DVD?

 It sets up the raid1 on /boot/root/swap (identical drives) really easy.
 Since GPTfdisk does not exist on any gentoo installation media, it
 sounds like the easiest (most direct path) to set up a RAID-1 gentoo
 workstation.

 https://help.ubuntu.com/community/Installation/SoftwareRAID

 Is this url the guide you used?
 Any other links?

This is all I've ever used:

http://www.gentoo.org/doc/en/handbook/index.xml

-- 
:wq



Re: [gentoo-user] Re: Somewhat OT: Any truth to this mess?

2012-02-20 Thread Michael Mol
On Mon, Feb 20, 2012 at 3:49 PM, Grant Edwards
grant.b.edwa...@gmail.com wrote:
 On 2012-02-20, Todd Goodman t...@bonedaddy.net wrote:
 * walt w41...@gmail.com [120219 15:37]:
 On 02/18/2012 05:18 AM, Dale wrote:

  Sounds like the internet could be switched off.  So, next question, how
  easy would it be to get it going again?  Hours?  Days?  Weeks?

 My guess is that the old farts that read this list could have their
 old dialup bulletin boards back on line in a day.  Probably on the
 original hardware gathering dust in the attic :p

 Naw, uucp on dialup on a Telebit Trailblazer 9600.  :-)

 It's been a while since I set up a uucp node, but I think I could
 manage it in a couple hours if required.  To paraphrase Damon Wayans,
 Homey don't play BBS.  I think I've got a USR sportster sitting
 around somewhere.  What I don't have is a POTS line.

Hey, my family ran a 53-line MajorBBS/Worldgroup setup back in the
day, with between 30-50 consumer hardware modems. (Those v.Everythings
were sweet... 3coms were good, too. Anything Motorola after their 14.4
model...not so much.) Don't knock the BBS. :)

Also, Citadel is still alive and kicking, too. :)

-- 
:wq



Re: [gentoo-user] Re: Somewhat OT: Any truth to this mess?

2012-02-20 Thread Michael Mol
On Mon, Feb 20, 2012 at 4:16 PM, Mark Knecht markkne...@gmail.com wrote:
 On Mon, Feb 20, 2012 at 1:04 PM, Michael Mol mike...@gmail.com wrote:
 SNIP
 Don't knock the BBS. :)

 hehe I met my wife on a Bay Area BBS called Matchmaker in the late
 1980's. Modems worked for me. ;-)

My parents (well, mother and stepfather) got together via the same BBS
they later bought and operated. :)

CyberSpace BBS, in Grand Rapids, MI. 616-454-7800 was the base number
for the hunt group. Doubled as an early dial-up ISP. Got my first bit
of tech support experience answering the support line; we supported
any OS, as long as I could figure out how to set up your DUN over the
phone. Not many ISPs offered to support Dreamcast. :)

-- 
:wq



Re: [gentoo-user] Re: Anybody using lightdm?

2012-02-19 Thread Michael Mol
On Sun, Feb 19, 2012 at 4:00 PM, Stefan G. Weichinger li...@xunil.at wrote:
 Am 2012-02-18 23:15, schrieb Grant:

 I switched to SLiM and I like it but I wish there were a way to
 restart and/or shutdown from the login screen.  I had to emerge xterm
 to make it work BTW.

 Could someone point out what is bad about gdm?
 Does it matter that much?

 What advantages does it have to use lightdm/slim/something else?

 Does it matter on modern powerful machines? Less packages to merge?

More dependencies, more moving parts, more stuff to build, more stuff
to break. And that's just a principle-driven overview. I don't have
any systems not running slim right now, so I can't do a direct
comparison without installing and configuring gdm.

The only significant use I might require gdm for would be XDMCP
support...slim doesn't appear to support that.

-- 
:wq



Re: [gentoo-user] /dev/dsp: Device or resource busy

2012-02-18 Thread Michael Mol
On Sat, Feb 18, 2012 at 10:55 AM,  meino.cra...@gmx.de wrote:
 Alex Schuster wo...@wonkology.org [12-02-18 16:52]:
 Hi there!

 I want to play Quake3. games-fps/quake3 works, but somehow extra stuff
 (maps, models) I downloaded and put into /opt/quake3/baseq3 is not being
 used. I think I had such problems before, so I used to run
 games-fps/quake3-bin instead. I have no sound.


 There are error messages:

 /dev/dsp: Device or resource busy
 Could not open /dev/dsp

 This is true, even as root I cannot echo something to this device. lsof
 and fuser return nothing, so I wonder what is using the device.

 I had sound problems with Quake3 in the past, and found a wrapper script
 quake3-sdl-sound [*], that worked. But now it doesn't.

 Any ideas? There will be a big match going on with my little sister this
 evening, so you see this is a really really important issue!

       Wonko


 Hi Alex,

 try (as root)

    fuser /dev/dsp

 to figure out, which task helds that device...

Typically, IME, Flash.

It should be possible to launch the binary using a wrapper that
redirects things to either ALSA or PulseAudio. I don't remember,
exactly; it's been a long time since I've had to deal with OSS-only
apps.
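Such a wrapper might look like the sketch below. aoss (from alsa-oss) and padsp (from PulseAudio) are the usual OSS shims; which one, if either, is installed depends on the local sound setup, so both are assumptions here, and quake3 stands in for any OSS-only binary:

```shell
# Sketch: run an OSS-only program through whatever OSS shim is available.
run_with_oss_shim() {
    prog=$1; shift
    if command -v padsp >/dev/null 2>&1; then
        padsp "$prog" "$@"   # PulseAudio OSS emulation
    elif command -v aoss >/dev/null 2>&1; then
        aoss "$prog" "$@"    # ALSA OSS emulation
    else
        "$prog" "$@"         # fall back to raw /dev/dsp access
    fi
}
# usage: run_with_oss_shim quake3
```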


-- 
:wq



Re: [gentoo-user] Somewhat OT: Any truth to this mess?

2012-02-18 Thread Michael Mol
On Sat, Feb 18, 2012 at 10:34 AM, Dale rdalek1...@gmail.com wrote:
 Alan McKinnon wrote:
 On Sat, 18 Feb 2012 06:39:27 -0600
 Dale rdalek1...@gmail.com wrote:

 Volker Armin Hemmann wrote:
 Am Samstag, 18. Februar 2012, 06:00:00 schrieb Dale:


 I don't really think they can unless they just cut power to all the
 computers.  After all, the internet is supposed to be redundant
 right? If there is a few computers still running that have a
 connection, it is still working.  Sort of anyway.

 Does make one wonder tho.  They have been talking about having a
 internet off switch but I'm not sure it would be that easy.

 basically, yes. Take down the core routers and backbones and
 everything falls apart.


 But how long would it take to actually do this?

 Another thing, the Government, especially the military, uses the
 internet too.

 Not quite. They use the same internet *technology* you do, not
 necessarily the same internet *devices*.




 What about banks?  Credit cards?  Heck, even food stamp cards?  Would
 phones work?  I'm not just thinking about Vonage or Skype either.

Banks, credit cards, etc. mostly operate on leased lines (Think T1,
T2, T3...) and landlines (point-of-sale vending, though that's
changing. ATMs also operate on landlines, and I don't believe that's
changing.).

You'd still have access to your money. You'd just have to go to a bank
branch or an ATM.

This whole thread is full of panicked reasoning. The biggest risk we face
is a scenario like Iran or Egypt's, where the government requires
controls on border routers. Most likely, they'd do it at the ISP
level, not at the core router level. That said, they could conceivably
demand core router operators acquiesce to their demands, but the worst
you're likely to see there is some network blocks' being dropped
offline.

And it's not so easy to take the Internet down with injected BGP
routes any more, either; most network operators apply some sort of
filtering.

-- 
:wq



Re: [gentoo-user] Somewhat OT: Any truth to this mess?

2012-02-18 Thread Michael Mol
On Sat, Feb 18, 2012 at 11:21 AM, Volker Armin Hemmann
volkerar...@googlemail.com wrote:
 Am Samstag, 18. Februar 2012, 06:39:27 schrieb Dale:
 Volker Armin Hemmann wrote:
  Am Samstag, 18. Februar 2012, 06:00:00 schrieb Dale:
  I don't really think they can unless they just cut power to all the
  computers.  After all, the internet is supposed to be redundant right?
  If there is a few computers still running that have a connection, it is
  still working.  Sort of anyway.
 
  Does make one wonder tho.  They have been talking about having a
  internet off switch but I'm not sure it would be that easy.
 
  basically, yes. Take down the core routers and backbones and everything
  falls apart.

 But how long would it take to actually do this?

 minutes


Risk of physical destruction is why the precise location of the core
routers was generally kept secret.


-- 
:wq



Re: [gentoo-user] Somewhat OT: Any truth to this mess?

2012-02-18 Thread Michael Mol
(Sorry for the top-post...I'm mobile atm.)

My understanding is that core network operators filter ASs for which they
don't have a contract for transit. I.e., if I were to get my own PI space,
I'd have to pay tier 1 networks (or pay someone to ride on *their*
contract) for a contract to have packets destined for my AS to be able to
reach me across their network.

ZZ
On Feb 18, 2012 1:04 PM, Pandu Poluan pa...@poluan.info wrote:

 On Sat, Feb 18, 2012 at 23:18, Michael Mol mike...@gmail.com wrote:
 

  8 snippage

 
  And it's not so easy to take the Internet down with injected BGP
  routes any more, either; most network operators apply some sort of
  filtering.
 

 Yes, there *are* filters against injecting BGP from non-trusted sources.

 But if the government somehow controls a Network Service Provider
 (NSP, the maintainers of Internet backbones), they can easily poison
 the BGP updates. Routers connected to the NSP will happily accept the
 poisoned updates since they rely on the NSP to provide big picture
 traffic management.

 Rgds,
 --
 FdS Pandu E Poluan
 ~ IT Optimizer ~

  • LOPSA Member #15248
  • Blog : http://pepoluan.tumblr.com
  • Linked-In : http://id.linkedin.com/in/pepoluan




Re: [gentoo-user] Somewhat OT: Any truth to this mess?

2012-02-18 Thread Michael Mol
And every time that's successful,  it's because some idiot admin wasn't
filtering their incoming BGP traffic properly. Ditto the network in Florida
which acted as a black hole for the entire Internet in the late 90s.

Proper training and filtering helps prevent these kinds of issues. It's
happened, sure. And it will happen again. And it will be recovered from
again. Policies will be adapted, trained and forgotten, again.

ZZ
On Feb 18, 2012 1:15 PM, Pandu Poluan pa...@poluan.info wrote:

 On Sat, Feb 18, 2012 at 21:36, Alan McKinnon alan.mckin...@gmail.com
 wrote:
  On Sat, 18 Feb 2012 06:00:00 -0600
  Dale rdalek1...@gmail.com wrote:
 
   And no, the intartubes will NOT be switched off.
  
 
  I don't really think they can unless they just cut power to all the
  computers.  After all, the internet is supposed to be redundant right?
  If there is a few computers still running that have a connection, it
  is still working.  Sort of anyway.
 
  Does make one wonder tho.  They have been talking about having a
  internet off switch but I'm not sure it would be that easy.
 
  To switch off the internet, you don't switch off the computers on the
  internet. You switch off the routers that drive the internet.
 

 You don't need to turn off the routers.

 Just inject BGP poison.

 I just re-found the news:


 http://www.computerworld.com/s/article/9197019/Update_Report_sounds_alarm_on_China_s_rerouting_of_U.S._Internet_traffic

 The article I linked above contains 2 incidents:

 The first incident rerouted traffic for a huge swath of Internet,
 including traffic destined to Microsoft, the Office of the USA SecDef,
 and others.

 The second incident blocked traffic for some sites, notably Twitter,
 Yahoo, and Facebook.

 BOTH incidents happened because of BGP poisoning. BOTH incidents
 affected traffic FROM the USA to destinations IN the USA even though
 the poisoning happened from OUTSIDE of the USA.

 The country where both incidents happened (in these cases, China) is
 not essential. ANY country with a BGP router connected to the backbone
 can easily poison other international backbone routers. Especially if
 said country has a HUGE International bandwidth.

 Rgds,
 --
 FdS Pandu E Poluan
 ~ IT Optimizer ~

  • LOPSA Member #15248
  • Blog : http://pepoluan.tumblr.com
  • Linked-In : http://id.linkedin.com/in/pepoluan




Re: [gentoo-user] alternative to thunderbird?

2012-02-18 Thread Michael Mol
On Sat, Feb 18, 2012 at 3:28 PM, Grant emailgr...@gmail.com wrote:
 I just switched from firefox to chromium (thanks to you guys) and I'm
 loving it.  What would you recommend for getting away from
 thunderbird?  I'm looking for something simple and minimal.

I still stick with Thunderbird or Evolution, myself. Once upon a time,
I used Balsa, though. You might look at that.

If you're already building KDE, KMail was decent.

-- 
:wq



Re: [gentoo-user] alternative to thunderbird?

2012-02-18 Thread Michael Mol
On Sat, Feb 18, 2012 at 3:36 PM, Alan McKinnon alan.mckin...@gmail.com wrote:
 On Sat, 18 Feb 2012 12:28:44 -0800
 Grant emailgr...@gmail.com wrote:

 I just switched from firefox to chromium (thanks to you guys) and I'm
 loving it.  What would you recommend for getting away from
 thunderbird?  I'm looking for something simple and minimal.

 - Grant


 claws-mail

 It takes some getting used to as all the nice
 cute gui stuff is not present. It sends and receives mail, does a very
 good job of filtering, is somewhat passable with rendering html mail
 (and absolutely refuses to generate it). Everything else that bloats
 mail clients is absent.

 It goes without saying that under no circumstances must you even
 consider using kdepim

Yeah, I should probably add a disclaimer: I haven't seriously used KDE
since before either KDE3 or GNOME 2 came out...

-- 
:wq



Re: [gentoo-user] Anybody using lightdm?

2012-02-18 Thread Michael Mol
On Sat, Feb 18, 2012 at 4:11 PM, Grant emailgr...@gmail.com wrote:
 Has anyone set up lightdm?  I'm using it with the default config file
 but I get a black screen with no error in Xorg.0.log.  gdm works fine.
  Any ideas?

I'm not using lightdm, but my understanding is that it's as minimalist
as you can get while still technically using a display manager. Check
into its configuration in /etc/config, etc. Find out exactly what it's
using (launching) for an xinitrc.

Also check out ~/.X*, and see if there are any user-local errors or logs of use.
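Concretely, a quick triage pass might look like this. The log locations are typical defaults and the lightdm config path is an assumption (commonly /etc/lightdm/ rather than /etc/config); every step is best-effort since any of these files may be absent:

```shell
# Sketch: gather the usual evidence when a display manager black-screens.
grep -E '^\(EE\)|^\(WW\)' /var/log/Xorg.0.log 2>/dev/null || true
ls -la "$HOME"/.xsession-errors 2>/dev/null || true   # per-user session errors
ls /etc/lightdm/ 2>/dev/null || true                  # lightdm.conf and friends
grep -v '^#' /etc/lightdm/lightdm.conf 2>/dev/null | grep . || true  # active settings
```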

-- 
:wq



Re: [gentoo-user] Alternative to firefox?

2012-02-17 Thread Michael Mol
On Fri, Feb 17, 2012 at 4:28 PM, Volker Armin Hemmann
volkerar...@googlemail.com wrote:
 Am Dienstag, 14. Februar 2012, 12:41:25 schrieb Grant:
 Has anyone found a GUI alternative to firefox they like that's in
 portage?  Something minimal preferably but with flash support?

 - Grant

 konqueror or chromium.

Konqueror 'minimal'? Doesn't it come with all of KDE? ;P

(To be fair, I liked Konqueror back when I was using KDE around 2001.)


-- 
:wq



Re: [gentoo-user] rsync from ext3 to vfat?

2012-02-17 Thread Michael Mol
On Fri, Feb 17, 2012 at 5:11 PM, Manuel McLure man...@mclure.org wrote:
 On Fri, Feb 17, 2012 at 1:03 PM, m...@trausch.us m...@trausch.us wrote:
 rsync works just fine with any normal set of options when using any sort
 of FAT as a destination.  There are, of course, a couple of gotchas:

  - FAT has limitations on file sizes.
  - FAT cannot store permissions or ACLs
  - FAT does not support extended attributes

 Other than that, though, you should be good.

 Add FAT considers two filenames that are the same except for case as
 the same filename to that list. NTFS has the same limitation.

No. http://support.microsoft.com/kb/100625

However, ntfs-3g only creates files in the POSIX namespace on NTFS,
which means that, depending on the filename, some files you create on
Linux can't be opened by apps (such as the Windows shell) that rely on
assumptions the DOS and Win32 namespaces guarantee but the POSIX
namespace does not.

I ran into that one.
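Back on the FAT side of the thread, those limitations map roughly onto rsync flags like this (a sketch with placeholder paths; --modify-window=1 absorbs FAT's 2-second timestamp granularity on repeat runs, and the --no-* flags skip attributes FAT drops anyway):

```shell
# Sketch: rsync onto a FAT destination. The paths stand in for a real
# source tree and FAT mount point.
SRC=/tmp/demo-src/
DST=/tmp/demo-fat/
mkdir -p "$SRC" "$DST"
echo "example" > "${SRC}file.txt"
if command -v rsync >/dev/null 2>&1; then
    rsync -rtv --modify-window=1 --no-perms --no-owner --no-group "$SRC" "$DST"
fi
```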

-- 
:wq



Re: [gentoo-user] grub vs grub 2

2012-02-15 Thread Michael Mol
On Wed, Feb 15, 2012 at 7:19 AM, Tanstaafl tansta...@libertytrek.org wrote:
 On 2012-02-14 6:19 PM, m...@trausch.us m...@trausch.us wrote:

 If you're interested, I can detail a history for you, and explain why
 GRUB 1 was discontinued and why the whole thing was restructured in
 detail.  I can't right now, as I am about to get on a conference call,
 but I can certainly do so later tonight or tomorrow if you want.


 What I would prefer is a detailed yet simple 'How-To' aimed at the average
 user rather than the hacker (in other words, don't assume I can read
 code/scripts and understand all or even some of what is happening) or write
 my own scripts, etc...

 Also, I'd prefer this How-To be aimed at current users of GRUB Legacy,
 meaning, 'This is how Legacy did it, but now GRUB2 does it this way, and for
 this reason'...

Just from reading Mike's earlier post, it sounds like the answer to
the bulk of those would be:
1) Legacy GRUB didn't do that. Your distro patched it to do that.
2) GRUB2 does it this way. Because legacy grub didn't do that.



 And last, a lot of examples of comparisons of GRUB-Legacy/GRUB2 config files
 for different types of systems (obviously, these should include all of the
 most common systems, then more esoteric ones can be added by those using
 them)...


It sounds like you're asking for a cookbook. (Which is admittedly
something I wondered about yesterday)

Still, it sounds like most of your menuentries could follow a template
like this:

menuentry '$name' --class gnu-linux --class gnu --class os {
       insmod $partition_table_module
       insmod $filesystem_modules
       set root='(/dev/which_disk,$partition_table_entry_identifier)'
       search --no-floppy --fs-uuid --set=root $UUID_OF_ROOT_FILESYSTEM
       linux   $path_to_kernel_from_boot root=/dev/root_fs_disk ro
}

(Shamelessly adapted from Mike's sample. I removed the video and echo
lines, as I suspect they're not necessary, and I removed the gzio
module, though I don't know what it's needed for.)

-- 
:wq



Re: [gentoo-user] Restrict site access by SSL Client Cert?

2012-02-15 Thread Michael Mol
On Wed, Feb 15, 2012 at 9:46 AM, Tanstaafl tansta...@libertytrek.org wrote:
 Hi everyone,

 I know that you can restrict access to a certain site using either Basic
 HTTP Auth or Digest Auth, but I was wondering - can you do the same with an
 SSL Client Certificate?

 I'd like to prevent access to an ancient web based database to only users
 that have a Client Cert that I created for them installed.

 Is this possible? I'd also like to provide for IP based exceptions if
 possible, but if I can't do both, I'll just install the Cert for everyone.

Two ways (that I know of) to do this:

1) Configure a front-end proxy like squid to do it.
2) Configure Apache to do it.

I haven't done it myself, though, and I hear the error messages the
OpenSSL libraries give you are cryptic.
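For the Apache route, the core mod_ssl directives are roughly the ones below. This is an untested sketch: the CA path and the /ancient-db location are placeholders, and it assumes the client certs were signed by a CA you control:

```apache
# Require a client certificate signed by your own CA for one location.
SSLCACertificateFile /etc/apache2/ssl/client-ca.pem

<Location /ancient-db>
    SSLRequireSSL
    SSLVerifyClient require
    SSLVerifyDepth  1
</Location>
```

For the IP-based exceptions, one pattern I've seen for Apache 2.2 is `SSLVerifyClient optional` combined with an `SSLRequire` expression that tests `%{SSL_CLIENT_VERIFY} eq "SUCCESS"` or the remote address; check the mod_ssl docs before relying on it.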

-- 
:wq



Re: [gentoo-user] grub vs grub 2

2012-02-14 Thread Michael Mol
On Tue, Feb 14, 2012 at 1:52 PM, m...@trausch.us m...@trausch.us wrote:
 On 02/14/2012 01:40 PM, LK wrote:

 On 120214, at 19:24, m...@trausch.us wrote:
 On 02/14/2012 01:08 PM, LK wrote:
 BTW: So is grub0 still supported by gentoo / maintained by themselves?
 Does that matter(it is boot, no network stuff) ?
 GRUB Legacy (that is, GRUB versions 0.xx) is still the default in
 Gentoo.  In order to use GRUB 2 (that is, GRUB version 1.99 in Portage)
 you'll have to unmask sys-boot/grub-1.99-r2.

 The thing is, IMO grub0 is better / simpler.


 I disagree.  GRUB Legacy is not the same in any two distributions
 because every single distribution patches it differently because it
 hasn't had core functionality updated in a very long time.  It's pretty
 much abandoned by upstream, as well.

 I'm not saying that it is bad, but I _am_ saying that it has outlived
 its usefulness.

 GRUB 2 follows an entirely different architecture.

A detailed elaboration would be nice.

A contrasting migration guide, complete with the how's, where's and
why's would be awesome. (Once one's invested in understanding a tool,
a 1-2-3-itsmagic walkthrough is very discomforting.)

-- 
:wq



Re: [gentoo-user] grub vs grub 2

2012-02-14 Thread Michael Mol
On Tue, Feb 14, 2012 at 3:35 PM, m...@trausch.us m...@trausch.us wrote:
 On 02/14/2012 02:04 PM, Michael Mol wrote:
 A detailed elaboration would be nice.

 A contrasting migration guide, complete with the how's, where's and
 why's would be awesome. (Once one's invested in understanding a tool,
 a 1-2-3-itsmagic walkthrough is very discomforting.)

 While there are many different points that differ between the two, the
 biggest are:

  - Supported upstream.

  - Can boot from GPT as well as MBR partition table types, regardless
    of whether EFI is in use or not.  Also supports the use of Apple
    Partition Maps, BSD disk labels, and others through modules.

  - Doesn't require patching to deal with modern situations; you can
    download upstream source code and it will work, unlike GRUB Legacy.

  - Can boot from virtually any filesystem you would want to use,
    not just a small handful of them; includes ISO9660, UDF, Reiser,
    btrfs, NTFS, ZFS, HFS and HFS+, among others.

  - Supports selecting filesystems by UUID without distribution-specific
    patches, for filesystem types that can be identified by UUIDs.

  - Can be booted from BIOS or EFI on the PC, and no longer depends on
    the existence of any particular type of firmware (no more probing
    for BIOS boot drives, which can fail on many different systems).

    This means that GRUB 2 doesn't have to be hand-installed on systems
    GRUB Legacy couldn't figure out for whatever reason.  And yes, there
    were a good number of them, where LILO was the only choice due to
    its use of block maps (another not-so-robust booting mechanism which
    required significantly more maintenance than GRUB does).

  - Can boot Linux, the BSDs, any Multiboot or Multiboot2 kernel, and
    EFI applications.

  - Supports El Torito natively on platforms that use it (e.g., BIOS)
    to boot optical media, meaning that it is possible to use GRUB 2
    boot anything that can be burned to an optical disk.  This makes it
    easier to work with testing environments burned to any form of
    optical disk.

  - Better code quality than GRUB Legacy, with more loose coupling
    between components and making it possible for people to more easily
    write GRUB modules than with GRUB Legacy.  Additionally, nearly
    anything that would have been a patch to GRUB Legacy can be written
    as a module in GRUB 2, making it easier to share modules between
    distributions.  This also means it is *much* more portable.

  - Can be run as an EFI application on modern systems using EFI, such
    as Intel-based Macintosh systems, without requiring BIOS emulation.
    It can also emulate an EFI environment for things which require it
    in order to boot.

  - Eliminates dependence on BIOS in order to determine available boot
    devices.  This empowers GRUB to be able to boot without firmware
    assistance from many different mediums, including USB and PXE, even
    without firmware support.

  - Supports booting from Linux device-mapper and LVM2 configurations,
    as well as encrypted partitions.

  - Supports kernels  16 MB in size without patches.  This can happen
    when you compile a purely static kernel and support a great deal of
    options without putting them into modules.  Not common, but does
    happen.

 Additionally, GRUB 2 standardizes (upstream) a number of things which
 were developed independently by various distributions as patches for
 GRUB Legacy.  Gentoo's legacy GRUB is heavily patched,

 The configuration file isn't terribly difficult to figure out, either;
 as I've mentioned before, there is *absolutely* no requirement to use
 grub2-mkconfig, it just makes life easier.

 For example, here is the entry that boots my current kernel:

 menuentry 'GNU/Linux, with Linux 3.2.5-gentoo' --class gnu-linux --class
 gnu --class os {
        load_video
        insmod gzio
        insmod part_gpt
        insmod ext2
        set root='(/dev/sda,gpt2)'
        search --no-floppy --fs-uuid --set=root
 3820beff-80b5-4d05-b989-3ab9265bc2a3
        echo    'Loading Linux 3.2.5-gentoo ...'
 linux   /vmlinuz-3.2.5-gentoo root=/dev/sda3 ro
 }

 Adding an entry is no more complex as it was before; copy, paste, edit.
  Simple.  No commands necessary since GRUB reads the grub.cfg file from
 the filesystem when it loads, and doesn't embed it anywhere.

 (And yes, I have a separate /boot; reason being is that it is mounted -o
 sync, that is, when it is mounted at all.  At least on my primary
 desktop system; /boot is actually on the root fs on most of my systems.)

 There will be a day when GRUB Legacy won't be supported by distributions
 at all.  There's no need to maintain multiple bootloaders (and upstream
 refuses to do so, reasonably), and many of the tricks, patches and
 workarounds of old are no longer necessary with GRUB 2.

 Also, it becomes possible to use the Linux kernel's long-existing
 installation hook to automatically update the boot list when you

Re: [gentoo-user] Alternative to firefox?

2012-02-14 Thread Michael Mol
On Tue, Feb 14, 2012 at 3:41 PM, Grant emailgr...@gmail.com wrote:
 Has anyone found a GUI alternative to firefox they like that's in
 portage?  Something minimal preferably but with flash support?

I mostly use Chromium. IIRC, there's also Galeon. You'd have to look
at the current state of the ebuild to see how little (or much) of Gtk
and GNOME you'd have to pull in.

-- 
:wq



Re: [gentoo-user] grub vs grub2

2012-02-14 Thread Michael Mol
On Tue, Feb 14, 2012 at 3:58 PM, LK linuxrocksrul...@googlemail.com wrote:
 What do you think of putting this conversation onto some website, as tutorial 
 or clarification =P ?

http://archives.gentoo.org/gentoo-user/msg_ee5c878773ac6ca9f49a33191654e3db.xml

-- 
:wq



Re: [gentoo-user] Alternative to firefox?

2012-02-14 Thread Michael Mol
On Tue, Feb 14, 2012 at 4:29 PM, Alecks Gates fuzzylunk...@gmail.com wrote:

 On Feb 14, 2012 4:16 PM, Paul Hartman paul.hartman+gen...@gmail.com
 wrote:

 On Tue, Feb 14, 2012 at 2:41 PM, Grant emailgr...@gmail.com wrote:
  Has anyone found a GUI alternative to firefox they like that's in
  portage?  Something minimal preferably but with flash support?

 Chromium/Chrome, Opera, Konqueror... flash works in all of those and
 are all fast and minimalistic compared to Firefox. Probably Epiphany,
 too, but I don't use Gnome so I haven't tried it in years.

 Firefox is quite fast and is also the only browser I've found that can
 easily manage hundreds of tabs + amazing addons.  Firefox has no true
 alternative, if you consider everything.  The last time I tried Epiphany
 flash didn't appear to work out of the box, but I might have done something
 wrong too.

Try Chromium and Seamonkey.

Also, the phrase "...has no true alternative, if you consider
everything" makes you sound like a fanboy. Be careful with that.

-- 
:wq



[gentoo-user] HTPC and Gentoo

2012-02-11 Thread Michael Mol
So I've got Inara in position to be my HTPC box. Tried installing
MythTV, but I can't make heads or tails of how to have it do the
things I'd like it to do:

* Play DVDs inserted into the DVD drive
* Hit streaming websites like Netflix, Hulu, Amazon Prime, Youtube, etc.
* Play video and audio files I already have.
* Launch games like StepMania and/or Frets on Fire.

( Note, I don't have a video capture card installed. I haven't yet
picked one up that will do ATSC. Once I do, I'll be attaching it to an
antenna. )

I've looked at both the official Gentoo wiki page and the page on the
unofficial Gentoo wiki, but neither gets me any farther than I already
am. I'm not even committed to using MythTV; any fullscreen HTPC
interface that operates at 720p is fine.

-- 
:wq



Re: [gentoo-user] Recommended VPN Tunnel client?

2012-02-10 Thread Michael Mol
On Thu, Feb 9, 2012 at 10:48 PM, Pandu Poluan pa...@poluan.info wrote:
 Scenario: I have a server in the cloud that needs to connect to an internal
 server in the office. There are 2 incoming connections into my office, ISP
 A and ISP B. The primary connection is A, but if A goes down, we can use
 B. The app running on the cloud server has no automatic failover ability
 (i.e., if A goes down, someone must change the app's conf to point to B).

 My thought: If I can make a tunnel from the server to the FortiGate firewall
 currently guarding the HQ, the cloud app can simply be configured to connect
 to the internal IP address of the internal server. No need to manually
 change the app's conf.

 The need: a VPN client that:
 + can selectively send packets fulfilling a criteria (in this case, dest= IP
 address of internal server)*
 + has automatic failover and failback ability

 *solutions involving iptables and iproute2 are also acceptable

 Can anyone point me to the right direction re: what package and the relevant
 howto?

 Thanks in advance.

 Rgds,

Not exactly what you're looking for, but this might help:

http://www.ntop.org/products/n2n/

That would set up reliable visibility on layer 2. You probably want to
employ something like 802.1x on top of it.
-- 
:wq



Re: [gentoo-user] Re: Recommended VPN Tunnel client?

2012-02-10 Thread Michael Mol
On Fri, Feb 10, 2012 at 12:29 PM, Pandu Poluan pa...@poluan.info wrote:

 On Feb 11, 2012 12:16 AM, Michael Orlitzky mich...@orlitzky.com wrote:

 On 02/10/12 11:46, Pandu Poluan wrote:
 
  On Feb 10, 2012 10:08 PM, Mick michaelkintz...@gmail.com
  mailto:michaelkintz...@gmail.com wrote:
 
   
The need: a VPN client that:
+ can selectively send packets fulfilling a criteria (in this
  case, dest=
IP address of internal server)*
 
  As far as I know typical VPNs require the IP address (or FQDN) of the
  VPN
  gateway.  If yours changes because ISP A goes down then the tunnel
  will fail
  and be torn down.

 I must have missed the original message. OpenVPN can do this. Just
 specify multiple remote vpn.example.com lines in your client configs,
 one for each VPN server.

 It also handles updating the routing table for you. Rather than match
 IP address of internal server, it will match IP address on internal
 network and route through the VPN automatically.


 I'm still torn between OpenVPN and HAproxy. The former works with both TCP
 and UDP, while the latter is lighter and simpler but works with TCP only*.

 *The traffic will be pure TCP, but who knows I might need a UDP tunnel in
 the future.

 Any experience with either?

 Do note that I don't actually need a strong security (e.g. IPsec); I just
 need automatic failover *and* fallback.

We're not using multiple internet connections to the same network
where I work, but we do use UDP-based OpenVPN to connect a few
networks.

TCP OpenVPN connections are very, very bad, IMO. With a TCP VPN, you
easily break systems' TCP stacks' link bandwidth estimation. I once
had a 30s ping time, because the pipe was hogged and backlogged from a
mail client synchronizing.
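The multiple-"remote" failover Michael Orlitzky describes above, combined
with UDP transport, can be sketched as an OpenVPN client config. This is a
sketch only: the hostnames and port are placeholders, and the keepalive
numbers are arbitrary; tune them to how fast you want failover to trigger.

```text
# client.conf sketch -- hostnames/port are hypothetical.
client
dev tun
proto udp                          # keep the tunnel itself unreliable
remote vpn-isp-a.example.com 1194  # primary (ISP A)
remote vpn-isp-b.example.com 1194  # tried next if A is unreachable
resolv-retry infinite
keepalive 10 60                    # ping every 10s; restart (and move on
                                   # to the next remote) after 60s silence
persist-key
persist-tun
```

When the active remote stops answering pings, the client restarts and
cycles to the next remote line, which gives you the failover *and*
fallback without touching the app's config.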

-- 
:wq



Re: [gentoo-user] Re: Recommended VPN Tunnel client?

2012-02-10 Thread Michael Mol
On Fri, Feb 10, 2012 at 1:05 PM, Pandu Poluan pa...@poluan.info wrote:

 On Feb 11, 2012 12:42 AM, Michael Mol mike...@gmail.com wrote:

 On Fri, Feb 10, 2012 at 12:29 PM, Pandu Poluan pa...@poluan.info wrote:
 
  On Feb 11, 2012 12:16 AM, Michael Orlitzky mich...@orlitzky.com
  wrote:
 
  On 02/10/12 11:46, Pandu Poluan wrote:
  
   On Feb 10, 2012 10:08 PM, Mick michaelkintz...@gmail.com
   mailto:michaelkintz...@gmail.com wrote:
  

 The need: a VPN client that:
 + can selectively send packets fulfilling a criteria (in this
   case, dest=
 IP address of internal server)*
  
   As far as I know typical VPNs require the IP address (or FQDN) of
   the
   VPN
   gateway.  If yours changes because ISP A goes down then the tunnel
   will fail
   and be torn down.
 
  I must have missed the original message. OpenVPN can do this. Just
  specify multiple remote vpn.example.com lines in your client configs,
  one for each VPN server.
 
  It also handles updating the routing table for you. Rather than match
  IP address of internal server, it will match IP address on internal
  network and route through the VPN automatically.
 
 
  I'm still torn between OpenVPN and HAproxy. The former works with both
  TCP
  and UDP, while the latter is lighter and simpler but works with TCP
  only*.
 
  *The traffic will be pure TCP, but who knows I might need a UDP tunnel
  in
  the future.
 
  Any experience with either?
 
  Do note that I don't actually need a strong security (e.g. IPsec); I
  just
  need automatic failover *and* fallback.

 We're not using multiple internet connections to the same network
 where I work, but we do use UDP-based OpenVPN to connect a few
 networks.

 TCP OpenVPN connections are very, very bad, IMO. With a TCP VPN, you
 easily break systems' TCP stacks' link bandwidth estimation. I once
 had a 30s ping time, because the pipe was hogged and backlogged from a
 mail client synchronizing.


 No, no, no. What I meant was running TCP and UDP *on top of* OpenVPN (which
 uses UDP).

 HAproxy seems to be able to perform its magic with TCP connections.

That's what I was talking about. Where I work, we use OpenVPN,
operating in UDP mode. This is after several bad experiences using it
in TCP mode.

By UDP mode and TCP mode, I mean OpenVPN's connections to other
OpenVPN nodes were in UDP or TCP, respectively. When OpenVPN's
connections operate over TCP (and thus it gets guaranteed delivery),
you can create a situation where a tunneled TCP connection attempts to
push data faster than your Internet connection can allow because it
never gets any congestion feedback; OpenVPN was accepting packets
faster than it could shove them through, and was buffering the rest.

In the situation I encountered, I was syncing my email over the vpn,
but I couldn't quickly reach any internal services; their response
time got slower and slower until I bounced my openvpn daemon (breaking
any outstanding tunneled TCP connections), but then they rapidly
degraded again. Towards the end, I discovered I had a non-tunneled
ping time of 100 ms, but a tunneled ping time of 30m.

If HAProxy is smart about congestion management, you shouldn't see
this behavior. If not, you may.
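The backlog effect described above can be illustrated with a toy model (my
own sketch, not a network simulator): a reliable tunnel absorbs every
excess packet, so its queue grows without bound, while a lossy tunnel
drops the excess, which is exactly the feedback the *inner* TCP needs to
back off.

```python
# Toy model of the tunneled-TCP backlog -- my own sketch, not a simulator.
def queue_growth(offered_pps: int, link_pps: int, seconds: int, reliable: bool) -> int:
    """Return the tunnel's queue depth (in packets) after `seconds`."""
    queue = 0
    rate = offered_pps
    for _ in range(seconds):
        queue += rate                  # packets handed to the tunnel
        queue -= min(queue, link_pps)  # link drains at fixed capacity
        if not reliable and queue > 0:
            queue = 0                  # UDP tunnel: excess is dropped...
            rate = max(1, rate // 2)   # ...so the inner TCP halves its rate
    return queue

# A sender offering 100 pkt/s into a 60 pkt/s uplink:
print(queue_growth(100, 60, 30, reliable=True))   # backlog keeps growing
print(queue_growth(100, 60, 30, reliable=False))  # empties once TCP backs off
```

With the reliable tunnel the queue (and with it, ping time) climbs every
second; with drops, the halved send rate fits the link and the queue stays
empty.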

-- 
:wq



Re: [gentoo-user] Re: Recommended VPN Tunnel client?

2012-02-10 Thread Michael Mol
On Fri, Feb 10, 2012 at 1:22 PM, Todd Goodman t...@bonedaddy.net wrote:
 * Michael Mol mike...@gmail.com [120210 12:51]:
 [..]
 That's what I was talking about. Where I work, we use OpenVPN,
 operating in UDP mode. This is after several bad experiences using it
 in TCP mode.

 By UDP mode and TCP mode, I mean OpenVPN's connections to other
 OpenVPN nodes were in UDP or TCP, respectively. When OpenVPN's
 connections operate over TCP (and thus it gets guaranteed delivery),
 you can create a situation where a tunneled TCP connection attempts to
 push data faster than your Internet connection can allow because it
 never gets any congestion feedback; OpenVPN was accepting packets
 faster than it could shove them through, and was buffering the rest.

 So obviously OpenVPN wasn't handling congestion appropriately and should
 have been using some queueing discipline to discard instead of letting
 transmit queues grow unbounded.

Sure, that contributed to the problem, and may qualify as a bug. On
the flip side, by operating OpenVPN in TCP mode, you're saying you
want guaranteed delivery across the link.


 But switching to UDP from TCP just pushes the problem off your OpenVPN
 gateway and onto the outside network.

 If you're really receiving more traffic than can be sent over the
 outside network, now you're relying on intermediate routers to do the
 right thing with your excess UDP traffic and most likely impacting TCP
 traffic through the same router.

OpenVPN was running on the router at both ends. The sending side was
on the lean side of an ADSL modem, plugged directly into it, so
the entire issue was handled locally.

Even if OpenVPN wasn't running on the router itself, there wouldn't
*be* excess UDP traffic when running OpenVPN in UDP mode, as
congestion management would be behaving properly, so other traffic
would not be unduly impacted.

-- 
:wq



Re: [gentoo-user] Re: Recommended VPN Tunnel client?

2012-02-10 Thread Michael Mol
On Fri, Feb 10, 2012 at 2:21 PM, Todd Goodman t...@bonedaddy.net wrote:
 * Michael Mol mike...@gmail.com [120210 13:36]:
 On Fri, Feb 10, 2012 at 1:22 PM, Todd Goodman t...@bonedaddy.net wrote:
  * Michael Mol mike...@gmail.com [120210 12:51]:
  [..]
  That's what I was talking about. Where I work, we use OpenVPN,
  operating in UDP mode. This is after several bad experiences using it
  in TCP mode.
 
  By UDP mode and TCP mode, I mean OpenVPN's connections to other
  OpenVPN nodes were in UDP or TCP, respectively. When OpenVPN's
  connections operate over TCP (and thus it gets guaranteed delivery),
  you can create a situation where a tunneled TCP connection attempts to
  push data faster than your Internet connection can allow because it
  never gets any congestion feedback; OpenVPN was accepting packets
  faster than it could shove them through, and was buffering the rest.
 
  So obviously OpenVPN wasn't handling congestion appropriately and should
  have been using some queueing discipline to discard instead of letting
  transmit queues grow unbounded.

 Sure, that contributed to the problem, and may qualify as a bug. On
 the flip side, by operating OpenVPN in TCP mode, you're saying you
 want guaranteed delivery across the link.

 Yes, certainly.  And certainly TCP has far more resource requirements on the
 sending side.  However, it also has congestion avoidance built in to it,
 which UDP does not.

And that's perfectly fine, when you're going to be tunneling an entire
IP stack inside OpenVPN. If a tunneled application needs low latency,
low guarantee of delivery, it can use UDP. If a tunneled application
needs guarantee of delivery, it can use TCP. But if the OpenVPN tunnel
is itself using TCP, you lose low latency opportunities, and you deny
your tunneled applications' ability to respond to congestion.



 
  But switching to UDP from TCP just pushes the problem off your OpenVPN
  gateway and onto the outside network.
 
  If you're really receiving more traffic than can be sent over the
  outside network, now you're relying on intermediate routers to do the
  right thing with your excess UDP traffic and most likely impacting TCP
  traffic through the same router.

 OpenVPN was running on the router at both ends. The sending side was
 on the lean side of an ADSL modem, plugged directly into it, so
 the entire issue was handled locally.

 There was no infrastructure between the two routers?  They had a direct
 connection between them?  It would be slightly strange to go through the
 hassle of running OpenVPN in that case...

workstation - ovpn - ADSL (6 Mbps/512 Kbps) - ATT - ADSL (6 Mbps/512 Kbps) - ovpn - server

Both sides would be pushing up the weak end of ADSL, and both sides'
local routers would be discarding layer 3 packets that won't fit. ATT
wouldn't even have seen the excess traffic.



 Even if OpenVPN wasn't running on the router itself, there wouldn't
 *be* excess UDP traffic when running OpenVPN in UDP mode, as
 congestion management would be behaving properly, so other traffic
 would not be unduly impacted.

 Why do you think congestion management would be behaving properly?  What
 congestion management are you referring to for UDP traffic?

The fact that the tunneled TCP packets and fragments would be dropped,
causing the tunneled connections' relevant TCP stacks to scale back.


 The only thing intermediate routers can do in the case of congestion due
 to UDP traffic is to drop.  And depending on the queueing implementation
 they may end up dropping TCP traffic as well.

Which is *fine*, as long as the TCP packets are encapsulated inside
the tunnel, and the tunnel itself is UDP; the connection owners for
the encapsulated tunnels would scale back their throughput
automatically. If the TCP packet dropped is what's carrying the tunnel
itself, then one of the openvpn instances will resend, and the
encapsulated connection's packet will still ultimately reach its
destination.


 Almost certainly they'll signal congestion to TCP endpoints with traffic
 through them, hence impacting TCP traffic as well.

Not sure what you mean here.

Michael Orlitsky had a decent, relevant link:
http://sites.inka.de/sites/bigred/devel/tcp-tcp.html

Though instead of stacking TCP/IP/PPP on top of SSH/TCP/IP, I was
packing IMAP/TCP/IP on top of OpenVPN/TCP/IP.

-- 
:wq



Re: [gentoo-user] Google - can not open any link

2012-02-09 Thread Michael Mol
On Thu, Feb 9, 2012 at 8:48 AM, Joseph syscon...@gmail.com wrote:
 On 02/09/12 09:30, Michael Hampicke wrote:

 You are right on; I disable Java in firefox and now can open links in
 google search.


 As I said, I have disabled cookies and javascript for google.de (using
 cookiemonster and noscript). But as I sometimes use google maps, I allow
 javascript on google.com and then use google.com for maps - with cookies
 disabled. And here I have the same issue as you have, I cannot open the
 links, but the strange thing is: it's not all the time, sometimes it
 works, sometimes it does not.

 Who knows what google guys were doing. Maybe they left a screw driver in
 there or something :)


 I just got a hint from Gentoo forum.  It seems to me others got hit by the
 same issue.
 quote--
 Google is getting crappier and crappier by the day. What happened to the
 lean, mean, simple search engine that everyone fell in love with? Now
 they're just bloating it with unnecessary javascript crap.

 Hmm, one idea: Do you accept cookies from google? I don't. If you don't
 either, try activating them and see if it works then.
 --end quote--

 I'm switching to Bing search engine. It seems to me Google is getting evil
 now.

I very much like DuckDuckGo. It uses Bing as its back-end, but applies
filters, etc, to clean out known spam and copy-and-ad sites. If I
can't find something with DuckDuckGo, then I try Google as a followup.
(DDG doesn't handle long literals very well; queries for things like
long model numbers don't usually work out.)

You can also use it as a calculator, like you can Google, but it
defers calculations (and other relevant things) to Wolfram Alpha.
Queries on some other things will quote verbatim Wikipedia, and you
can click through to get to the WP page.

It's very nice.

-- 
:wq



Re: [gentoo-user] Recovering MySQL Database from EXT4 Formatted Hard Disk ...

2012-02-08 Thread Michael Mol
On Tue, Feb 7, 2012 at 6:28 PM, Christopher Kurtis Koeber
ckoe...@gmail.com wrote:
 Hello,

 I am trying to recover MySQL databases (which were properly shut down) from
 an EXT4 formatted hard disk.

What happened to require the recovery? Which parts of the database
server shut down properly, and which didn't?

 I loaded the SystemRescueCD distro that you can
 get online and when running TestDisk I can see the partitions but I cannot
 recover said partitions because it tells me the structure is bad (any
 options here, by the way?)

What kind of partition table was it?


 With PhotoRec, I can recover parts of the MySQL Database but I cannot get
 the important *.MYD files because I guess PhotoRec doesn't have the
 signatures for that type of file.

 So, any options I have at this point?

Probably.


 Thank you for your time.




-- 
:wq



Re: [gentoo-user] (s)mplayer problems

2012-02-08 Thread Michael Mol
On Wed, Feb 8, 2012 at 8:40 AM, Helmut Jarausch
jarau...@igpm.rwth-aachen.de wrote:
 On 02/08/2012 02:07:55 PM, Nilesh Govindrajan wrote:
 On Wed 08 Feb 2012 06:31:19 PM IST, Helmut Jarausch wrote:
  Hi,
 
  I need some advice.
 
  Since a short time I have tremendous problems with mplayer /
 smplayer
  and I don't know why.
 
  First, vlc works just fine, i.e. video and audio
 
  Second, mplayer produces a segment fault within fglrx (ati-
  drivers-12.1-r1 with gentoo-sources-3.2.x)
 
  Third, smplayer does show the video (without a segment fault) but
  doesn't play audio.
 
  How can I isolate the problem?
 
  Many thanks for a hint,
  Helmut.
 
 

 Run mplayer on the command line and see what error it throws. Paste
 it

 here.

 Unfortunately, that's impossible.

mplayer $somefile > mplayerlog.txt 2>&1



 mplayer starts, opens a window and then segfaults, i.e. kills Xorg and
 forces me to reboot the machine.

 Xorg.0.log.old shows


 Backtrace:
 [  1669.886] 0: /usr/bin/X (xorg_backtrace+0x26) [0x564f86]
 [  1669.886] 1: /usr/bin/X (0x40+0x168bc9) [0x568bc9]
 [  1669.886] 2: /lib64/libpthread.so.0 (0x7fa7aa5f4000+0x10ff0)
 [0x7fa7aa604ff0]
 [  1669.887] 3: /usr/lib64/xorg/modules/drivers/fglrx_drv.so
 (xs111LookupPrivate+0x22) [0x7fa7a778c372]
 [  1669.887] 4: /usr/lib64/xorg/modules/drivers/fglrx_drv.so
 (xclLookupPrivate+0xd) [0x7fa7a7167cdd]
 [  1669.887] 5: /usr/lib64/xorg/modules/amdxmm.so (X740XvPutImage
 +0x12e) [0x7fa7a441f81e]
 [  1669.887] 6: /usr/bin/X (0x40+0x8a84e) [0x48a84e]
 [  1669.888] 7: /usr/lib64/xorg/modules/extensions/libextmod.so
 (0x7fa7a83f8000+0xf53e) [0x7fa7a840753e]
 [  1669.888] 8: /usr/bin/X (0x40+0x36979) [0x436979]
 [  1669.888] 9: /usr/bin/X (0x40+0x2613a) [0x42613a]
 [  1669.888] 10: /lib64/libc.so.6 (__libc_start_main+0xed)
 [0x7fa7a952b3cd]
 [  1669.888] 11: /usr/bin/X (0x40+0x2645d) [0x42645d]
 [  1669.888] Segmentation fault at address 0x20

 Thanks,
 Helmut.


Does rebuilding mplayer fix it?


-- 
:wq



Re: [gentoo-user] Google privacy changes

2012-02-08 Thread Michael Mol
On Wed, Feb 8, 2012 at 10:46 AM, Paul Hartman
paul.hartman+gen...@gmail.com wrote:
 On Wed, Feb 8, 2012 at 2:55 AM, Pandu Poluan pa...@poluan.info wrote:

 On Jan 27, 2012 11:18 PM, Paul Hartman paul.hartman+gen...@gmail.com
 wrote:


  8 snippage


 BTW, the Baidu spider hits my site more than all of the others combined...


 Somewhat anecdotal, and definitely veering way off-topic, but Baidu was the
 reason why my company decided to change our webhosting company: Its
 spidering brought our previous webhosting to its knees...

 Rgds,

 I wonder if Baidu crawler honors the Crawl-delay directive in robots.txt?

 Or I wonder if Baidu crawler IPs need to be covered by firewall tarpit rules. 
 ;)

I don't remember if it respects Crawl-Delay, but it respects forbidden
paths, etc. I've never been DDOS'd by Baidu crawlers, but I did get
DDOS'd by Yahoo a number of times. Turned out the solution was to
disallow access to expensive-to-render pages. If you're using
MediaWiki with prettified URLs, this works great:

User-agent: *
Allow: /mw/images/
Allow: /mw/skins/
Allow: /mw/title.png
Disallow: /w/
Disallow: /mw/
Disallow: /wiki/Special:
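If it helps, rules like these can be sanity-checked offline with Python's
stdlib robots.txt parser (the example.com URLs below are placeholders):

```python
from urllib.robotparser import RobotFileParser

# The rule set from above, inlined; example.com is a placeholder host.
rules = """\
User-agent: *
Allow: /mw/images/
Allow: /mw/skins/
Allow: /mw/title.png
Disallow: /w/
Disallow: /mw/
Disallow: /wiki/Special:
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Static assets stay crawlable; expensive dynamic paths do not.
print(rp.can_fetch("*", "http://example.com/mw/images/logo.png"))   # True
print(rp.can_fetch("*", "http://example.com/mw/index.php"))         # False
print(rp.can_fetch("*", "http://example.com/wiki/Special:Export"))  # False
```

Handy for checking that an Allow for static assets actually wins over the
broader Disallow before a well-behaved crawler hits the site.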

-- 
:wq



Re: [gentoo-user] Google privacy changes

2012-02-08 Thread Michael Mol
On Wed, Feb 8, 2012 at 12:17 PM, Pandu Poluan pa...@poluan.info wrote:

 On Feb 8, 2012 10:57 PM, Michael Mol mike...@gmail.com wrote:

 On Wed, Feb 8, 2012 at 10:46 AM, Paul Hartman
 paul.hartman+gen...@gmail.com wrote:
  On Wed, Feb 8, 2012 at 2:55 AM, Pandu Poluan pa...@poluan.info wrote:
 
  On Jan 27, 2012 11:18 PM, Paul Hartman
  paul.hartman+gen...@gmail.com
  wrote:
 
 
   8 snippage
 
 
  BTW, the Baidu spider hits my site more than all of the others
  combined...
 
 
  Somewhat anecdotal, and definitely veering way off-topic, but Baidu was
  the
  reason why my company decided to change our webhosting company: Its
  spidering brought our previous webhosting to its knees...
 
  Rgds,
 
  I wonder if Baidu crawler honors the Crawl-delay directive in
  robots.txt?
 
  Or I wonder if Baidu crawler IPs need to be covered by firewall tarpit
  rules. ;)

 I don't remember if it respects Crawl-Delay, but it respects forbidden
 paths, etc. I've never been DDOS'd by Baidu crawlers, but I did get
 DDOS'd by Yahoo a number of times. Turned out the solution was to
 disallow access to expensive-to-render pages. If you're using
 MediaWiki with prettified URLs, this works great:

 User-agent: *
 Allow: /mw/images/
 Allow: /mw/skins/
 Allow: /mw/title.png
 Disallow: /w/
 Disallow: /mw/
 Disallow: /wiki/Special:


 *slaps forehead*

 Now why didn't I think of that before?!

 Thanks for reminding me!

I didn't think of it until I watched the logs live and saw it crawling
through page histories during one of the events. MediaWiki stores page
histories as a series of diffs from the current version, so it has to
assemble old versions by reverse-applying the diffs of all the edits
made to the page between the current version and the version you're
asking for. If you have a bot retrieve ten versions of a page that has ten
revisions, that's 210 reverse diff operations. Grabbing all versions
of a page with 20 revisions would result in over 1500 reverse diffs.
My 'hello world' page has over five hundred revisions.

So the page history crawling was pretty quickly obvious...

-- 
:wq



Re: [gentoo-user] HEADS UP - postfix-2.9.0 is broken

2012-02-06 Thread Michael Mol
On Mon, Feb 6, 2012 at 12:29 PM, Stefan G. Weichinger li...@xunil.at wrote:
 Am 06.02.2012 18:08, schrieb Volker Armin Hemmann:
 Am Montag, 6. Februar 2012, 12:06:41 schrieb Helmut Jarausch:
 Hi,

 beware of installing postfix-2.9.0
 When started it tries to access /usr/lib/postfix but files have been
 installed into /usr/libexec/postfix


 postfix 2.9.0 works fine for me.

 But I also run cfg-update after the update and made sure to merge the ._ file
 correctly.

 Yep, same here for postfix 2.9.0-r1 now.

 Had to kill the master-process manually ... this sorted things out in
 the end ...

So, to avoid this, when upgrading from <2.9.0-r1 to 2.9.0-r1, one
should stop postfix, do the upgrade, and then start postfix again?
That's how I read it.

Alternately, copy the old init script prior to upgrade, use that
script to bring postfix down, and use the new script to bring postfix
up?


-- 
:wq



Re: [gentoo-user] HEADS UP - postfix-2.9.0 is broken

2012-02-06 Thread Michael Mol
On Mon, Feb 6, 2012 at 1:00 PM, Volker Armin Hemmann
volkerar...@googlemail.com wrote:
 Am Montag, 6. Februar 2012, 12:47:45 schrieb Michael Mol:
 On Mon, Feb 6, 2012 at 12:29 PM, Stefan G. Weichinger li...@xunil.at
 wrote:
  Am 06.02.2012 18:08, schrieb Volker Armin Hemmann:
  Am Montag, 6. Februar 2012, 12:06:41 schrieb Helmut Jarausch:
  Hi,
 
  beware of installing postfix-2.9.0
  When started it tries to access /usr/lib/postfix but files have been
  installed into /usr/libexec/postfix
 
  postfix 2.9.0 works fine for me.
 
  But I also run cfg-update after the update and made sure to merge the ._
  file correctly.
 
  Yep, same here for postfix 2.9.0-r1 now.
 
  Had to kill the master-process manually ... this sorted things out in
  the end ...

 So, to avoid this, when upgrading from <2.9.0-r1 to 2.9.0-r1, one
 should stop postfix, do the upgrade, and then start postfix again?
 That's how I read it.

 Alternately, copy the old init script prior to upgrade, use that
 script to bring postfix down, and use the new script to bring postfix
 up?

 I just upgraded and did /etc/init.d/postfix restart and it worked.

You reported success with 2.9.0. People are reporting difficulties
with 2.9.0-r1. Two different versions, or so I assumed.

-- 
:wq



Re: [gentoo-user] hwclock -- sysclock and the ntp-client

2012-02-06 Thread Michael Mol
On Mon, Feb 6, 2012 at 12:51 PM,  meino.cra...@gmx.de wrote:
 Hi,

 to get the correct system time I use ntp-client in the boot process.
 Furthermore in /etc/conf.d/hwclock I set:

    # Set CLOCK to UTC if your Hardware Clock is set to UTC (also known as
    # Greenwich Mean Time).  If that clock is set to the local time, then
    # set CLOCK to local.  Note that if you dual boot with Windows, then
    # you should set it to local.
    clock=UTC

    # If you want to set the Hardware Clock to the current System Time
    # (software clock) during shutdown, then say YES here.
    # You normally don't need to do this if you run a ntp daemon.
    clock_systohc=YES

    # If you want to set the system time to the current hardware clock
    # during bootup, then say YES here. You do not need this if you are
    # running a modern kernel with CONFIG_RTC_HCTOSYS set to y.
    # Also, be aware that if you set this to NO, the system time will
    # never be saved to the hardware clock unless you set
    # clock_systohc=YES above.
    clock_hctosys=NO

    # If you wish to pass any other arguments to hwclock during bootup,
    # you may do so here. Alpha users may wish to use --arc or --srm here.
    clock_args=

 In the kernel config file I had set:

    CONFIG_RTC_HCTOSYS=y
    CONFIG_RTC_HCTOSYS_DEVICE=rtc0

 I would expect that, after a reboot of a system whose system time is
 correctly set via ntp-client, the hwclock and system time would only
 differ by a small amount.

 But:
 solfire:/home/mccramerhwclock
 Mon Feb  6 19:05:11 2012  -0.172569 seconds
 solfire:/home/mccramerdate
 Mon Feb  6 18:49:37 CET 2012
 solfire:/home/mccramer

I don't know the CET tz, but I can see that the minutes don't match
up. I assume you ran the two commands within seconds of each other.
Is this true immediately after bootup, or does it take a while to get
that far off? It could be that your hardware clock is drifting, and
the system won't reset it until it goes to shutdown.
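For what it's worth, here is a quick heuristic for eyeballing such a gap
(my own sketch, not part of any tool): if hwclock and date disagree by
almost exactly a whole number of hours, suspect a UTC-vs-localtime
misconfiguration; an odd offset like the ~15 minutes above points at drift
or a missed systohc write at shutdown.

```python
from datetime import datetime

# Hypothetical helper: classify a hwclock/system-clock mismatch.
def classify_skew(hw: datetime, sys_: datetime, tol_minutes: float = 2.0) -> str:
    diff = abs((hw - sys_).total_seconds()) / 60.0  # gap in minutes
    hours = round(diff / 60.0)
    if hours >= 1 and abs(diff - hours * 60.0) <= tol_minutes:
        return "timezone-mismatch"   # near a whole-hour offset
    return "drift-or-missed-sync"

# The timestamps from the report: hwclock 19:05:11 vs. date 18:49:37.
print(classify_skew(datetime(2012, 2, 6, 19, 5, 11),
                    datetime(2012, 2, 6, 18, 49, 37)))  # drift-or-missed-sync
# A one-hour gap would look like a UTC/local mixup instead:
print(classify_skew(datetime(2012, 2, 6, 20, 0, 5),
                    datetime(2012, 2, 6, 19, 0, 0)))    # timezone-mismatch
```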

-- 
:wq



Re: [gentoo-user] hwclock -- sysclock and the ntp-client

2012-02-06 Thread Michael Mol
On Mon, Feb 6, 2012 at 1:39 PM,  meino.cra...@gmx.de wrote:
 Michael Mol mike...@gmail.com [12-02-06 19:20]:
 On Mon, Feb 6, 2012 at 12:51 PM,  meino.cra...@gmx.de wrote:
  Hi,
 
  to get the correct system time I use ntp-client in the boot process.
  Furthermore in /etc/conf.d/hwclock I set:
 
     # Set CLOCK to UTC if your Hardware Clock is set to UTC (also known as
     # Greenwich Mean Time).  If that clock is set to the local time, then
     # set CLOCK to local.  Note that if you dual boot with Windows, then
     # you should set it to local.
     clock=UTC
 
     # If you want to set the Hardware Clock to the current System Time
     # (software clock) during shutdown, then say YES here.
     # You normally don't need to do this if you run a ntp daemon.
     clock_systohc=YES
 
     # If you want to set the system time to the current hardware clock
     # during bootup, then say YES here. You do not need this if you are
     # running a modern kernel with CONFIG_RTC_HCTOSYS set to y.
     # Also, be aware that if you set this to NO, the system time will
     # never be saved to the hardware clock unless you set
     # clock_systohc=YES above.
     clock_hctosys=NO
 
     # If you wish to pass any other arguments to hwclock during bootup,
     # you may do so here. Alpha users may wish to use --arc or --srm here.
     clock_args=
 
  In the kernel config file I had set:
 
     CONFIG_RTC_HCTOSYS=y
     CONFIG_RTC_HCTOSYS_DEVICE=rtc0
 
  I would expect that, after a reboot of a system whose system time is
  correctly set via ntp-client, the hwclock and system time would only
  differ by a small amount.
 
  But:
  solfire:/home/mccramerhwclock
  Mon Feb  6 19:05:11 2012  -0.172569 seconds
  solfire:/home/mccramerdate
  Mon Feb  6 18:49:37 CET 2012
  solfire:/home/mccramer

 I don't know the CET tz, but I can see that the minutes don't match
  up. I assume you ran the two commands within seconds of each other.
 Is this true immediately after bootup, or does it take a while to get
 that far off? It could be that your hardware clock is drifting, and
 the system won't reset it until it goes to shutdown.

 --
 :wq


 Hi Michael,
 thank you for your reply.
 I set the configuration as mentioned above and booted twice with about
 five minutes wait.
 The commands were executed within seconds, yes.
 All hardware clocks drifts, but this is not the problem.
 The problem is that the hardware clock is not set to the system time
 in contradiction to what I think the comments in the config are
 saying.

 How can I fix that?

I don't really know. Are you sure that rtc0 corresponds to your
hardware clock device? Does setting clock_hctosys to YES have any
effect?

Is this in some kind of virtual-machine or hypervised environment
where something may be blocking the OS from setting the hardware
clock?

-- 
:wq



Re: [gentoo-user] hwclock -- sysclock and the ntp-client

2012-02-06 Thread Michael Mol
On Mon, Feb 6, 2012 at 1:46 PM, Dale rdalek1...@gmail.com wrote:
 meino.cra...@gmx.de wrote:
 Hi,

 to get the correct system time I use ntp-client in the boot process.
 Furthermore in /etc/conf.d/hwclock I set:

     # Set CLOCK to UTC if your Hardware Clock is set to UTC (also known as
     # Greenwich Mean Time).  If that clock is set to the local time, then
     # set CLOCK to local.  Note that if you dual boot with Windows, then
     # you should set it to local.
     clock=UTC

     # If you want to set the Hardware Clock to the current System Time
     # (software clock) during shutdown, then say YES here.
     # You normally don't need to do this if you run a ntp daemon.
     clock_systohc=YES

     # If you want to set the system time to the current hardware clock
     # during bootup, then say YES here. You do not need this if you are
     # running a modern kernel with CONFIG_RTC_HCTOSYS set to y.
     # Also, be aware that if you set this to NO, the system time will
     # never be saved to the hardware clock unless you set
     # clock_systohc=YES above.
     clock_hctosys=NO

     # If you wish to pass any other arguments to hwclock during bootup,
     # you may do so here. Alpha users may wish to use --arc or --srm here.
     clock_args=

 In the kernel config file I had set:

     CONFIG_RTC_HCTOSYS=y
     CONFIG_RTC_HCTOSYS_DEVICE=rtc0

 I would expect that, after a reboot of a system whose system time is
 correctly set via ntp-client, the hwclock and system time would only
 differ by a small amount.

 But:
 solfire:/home/mccramer>hwclock
 Mon Feb  6 19:05:11 2012  -0.172569 seconds
 solfire:/home/mccramer>date
 Mon Feb  6 18:49:37 CET 2012
 solfire:/home/mccramer>

 Is there anything else I have to tweak to achieve what I want?

 Thank you very much in advance for any help!

 Best regards,
 mcc

 PS: I need a correct hwclock since I want to wake the system via the
 hwclock.






 I ran into some issues when I rebooted and I had to set both
 clock_systohc & clock_hctosys to yes.  That worked for me at least.  One
 sets the BIOS at shutdown and the other loads from the BIOS when
 rebooting.

 Yours may need something else but if nothing else works, try that.

I think he's trying to depend on the kernel keeping the hw clock in
sync with the sw clock, and that part's not working for some reason.

It's a reasonable thing to desire, since an unplanned or ungraceful
shutdown could miss the sw-to-hw step.
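One hedged way to narrow that window (a sketch, not something from this thread): have cron push the NTP-disciplined system time into the RTC periodically, so an ungraceful shutdown loses at most one interval's worth of drift.

```shell
# Sketch: root crontab fragment (assumes util-linux hwclock and a UTC RTC).
# Copies the software clock into the hardware clock at the top of every hour;
# an unclean shutdown then costs at most ~an hour of hardware-clock drift.
0 * * * *  /sbin/hwclock --systohc --utc
```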

-- 
:wq



Re: [gentoo-user] hwclock -- sysclock and the ntp-client

2012-02-06 Thread Michael Mol
On Feb 6, 2012 7:00 PM, meino.cra...@gmx.de wrote:

 Michael Mol mike...@gmail.com [12-02-06 19:56]:
  On Mon, Feb 6, 2012 at 1:39 PM,  meino.cra...@gmx.de wrote:
   Michael Mol mike...@gmail.com [12-02-06 19:20]:
   On Mon, Feb 6, 2012 at 12:51 PM,  meino.cra...@gmx.de wrote:
Hi,
   
to get the correct system time I use ntp-client in the boot
process.
Furthermore in /etc/conf.d/hwclock I set:
   
   # Set CLOCK to UTC if your Hardware Clock is set to UTC (also
known as
   # Greenwich Mean Time).  If that clock is set to the local
time, then
   # set CLOCK to local.  Note that if you dual boot with
Windows, then
   # you should set it to local.
   clock="UTC"
   
   # If you want to set the Hardware Clock to the current System
Time
   # (software clock) during shutdown, then say YES here.
   # You normally don't need to do this if you run a ntp daemon.
   clock_systohc="YES"
   
   # If you want to set the system time to the current hardware
clock
   # during bootup, then say YES here. You do not need this if
you are
   # running a modern kernel with CONFIG_RTC_HCTOSYS set to y.
   # Also, be aware that if you set this to NO, the system time
will
   # never be saved to the hardware clock unless you set
   # clock_systohc=YES above.
   clock_hctosys="NO"
   
   # If you wish to pass any other arguments to hwclock during
bootup,
   # you may do so here. Alpha users may wish to use --arc or
--srm here.
   clock_args=""
   
In the kernel config file I had set:
   
   CONFIG_RTC_HCTOSYS=y
   CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
   
I would expect that after a reboot of the system whose system
time is
correctly set via ntp-client that the hwclock and system time only
differ in a small amount of time.
   
But:
solfire:/home/mccramer>hwclock
Mon Feb  6 19:05:11 2012  -0.172569 seconds
solfire:/home/mccramer>date
Mon Feb  6 18:49:37 CET 2012
solfire:/home/mccramer>
  
   I don't know the CET tz, but I can see that the minutes don't match
   up. I assume you ran the two commands within seconds of each other.
   Is this true immediately after bootup, or does it take a while to get
   that far off? It could be that your hardware clock is drifting, and
   the system won't reset it until it goes to shutdown.
  
   --
   :wq
  
  
   Hi Michael,
   thank you for your reply.
   I set the configuration as mentioned above and booted twice with about
   five minutes wait.
   The commands were executed within seconds, yes.
   All hardware clocks drifts, but this is not the problem.
   The problem is that the hardware clock is not set to the system time
   in contradiction to what I think the comments in the config are
   saying.
  
   How can I fix that?
 
  I don't really know. Are you sure that rtc0 corresponds to your
  hardware clock device? Does setting clock_hctosys to YES have any
  effect?
 
  Is this in some kind of virtual-machine or hypervised environment
  where something may be blocking the OS from setting the hardware
  clock?
 
  --
  :wq
 

 It is set

 lrwxrwxrwx 1 root root   4 2012-02-07 00:52 /dev/rtc -> rtc0
 crwxrwx--- 1 root audio 254, 0 2012-02-07 00:52 /dev/rtc0

 and it is the only device of its kind.

 As I wrote I am using ntp_client for setting my system time while
 booting up.
 So regardless of whether I set clock_hctosys, I always get
 the correct system time later in the boot process via ntp.

Sure. My question was more geared toward specifically obtaining information
about your hardware clock, which you probed with the hwclock command. You
insist your hardware clock isn't being updated, but I have insufficient
data to verify that, or even get a feel for the behavior of your system.
*Obviously* all hardware clocks drift, but I was wondering if your hardware
clock was drifting notably rapidly. I don't know at what interval the
hardware clock would normally be updated by the kernel, or what constitutes
'continuous' in that context.

Timestamps along with the commands and for notable events, such as when
ntpclient ran during bootup and when the system shut down, would be useful.


Re: [gentoo-user] Warning about old init scripts when updating dev-db/mysql-init-scripts-2.0_pre1-r2

2012-02-05 Thread Michael Mol
Probably etc-update.
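For anyone finding this later, a hedged sketch of what that means: portage stages pending config updates as `._cfg????_<name>` files next to the protected originals, and either of the two tools below (both ship with portage; run as root) merges them interactively.

```shell
#!/bin/sh
# List config updates portage has staged but not yet merged; each pending
# update sits beside the protected file as ._cfg????_<name>.
find /etc -name '._cfg????_*' 2>/dev/null || true
# Merge them interactively with either tool:
#   etc-update      # simple numbered menu
#   dispatch-conf   # diff-driven, keeps a history of replaced files
```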

On Sun, Feb 5, 2012 at 12:00 PM, Tanstaafl tansta...@libertytrek.org wrote:
 Hello,

 Also have this to deal with...

 In the emerge post install I get:

 WARN: postinst
 Old /etc/init.d/mysql and /etc/conf.d/mysql still present!
 Update both of those files to the new versions!

 But it doesn't say anything about *how* to update them...

 Is this documented anywhere? Or is this just evidence that I'm clueless
 (meaning, I should 'just know' what needs to be done)?




-- 
:wq



Re: [gentoo-user] distcc - amd64 and x86

2012-02-05 Thread Michael Mol
On Sun, Feb 5, 2012 at 12:04 PM, Samuraiii samura...@volny.cz wrote:
 Hello,

 I have (right now) 3 computers running Gentoo, and two of them have only
 1 GB of RAM - which is not enough for compiling LibreOffice.

 So my questions are:

 1) Is it possible to overcome this memory limitation with distcc? - i.e. call
 the compile on the weak machine and leave the memory load on the strong one

To a limited extent, yes. You could configure things such that
compiles happen remotely, but links always have to happen locally. And
anything not done with a C or C++ compiler happens locally.

AFAICT, any C or C++ app's most memory-consumptive act is linking.

Your better bet is probably going to be to add swap.
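As a sketch of that remote-compile/local-link split (the host address and job counts below are made up for illustration, not from this thread):

```shell
# /etc/portage/make.conf fragment (sketch -- adjust hosts and job counts):
FEATURES="distcc"
MAKEOPTS="-j6 -l2"     # up to 6 jobs across the cluster, load-capped locally
# /etc/distcc/hosts: strong box first, localhost last, so the 1 GB machine
# mostly dispatches; linking still happens locally regardless.
# 192.168.0.10/4 localhost/2
```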


 2) My second question is about the arch of each computer:
   one is a Core2Duo, the second an Athlon 64 X2, and the last a Pentium M.
   What steps do I need to take to use distcc across all of these machines?
   Do I need specific toolchains for each arch in question?

Yes. But I don't know the routine for this. I'll be watching this
thread myself, as I'd love to get some distcc crosscompile magic going
at home.


 3) How is distcc prone to network failures?

If a dispatched compile fails (either because of a failure on the
remote end, or because of some kind of network glitch), the compile is
tried again locally.

If a remote distccd can't be reached, the local dispatcher won't try
again for a configurable amount of time. By default, that's one
minute.
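That backoff window is tunable through the `DISTCC_BACKOFF_PERIOD` environment variable (in seconds; check the distcc man page for the exact semantics in your version):

```shell
# Shrink the per-host backoff from the 60-second default to 10 seconds.
# Setting it to 0 disables the backoff feature entirely.
export DISTCC_BACKOFF_PERIOD=10
```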

-- 
:wq


