Re: [gentoo-user] KDE 4.10 and plasma crashing

2013-02-09 Thread Neil Bothwick
Dale rdalek1...@gmail.com wrote:

 Frank Steinmetzger wrote:
  On 08.02.2013 23:54, Neil Bothwick wrote:
  On Fri, 08 Feb 2013 16:45:13 -0600, Dale wrote:
 
  Well, switched to a newer gcc, same thing.  Going back to KDE 4.9 for a
  bit.  They will have it fixed in a couple days.  After all, Linux has
  some of the smartest programmers there is.  I'm not sure of some users
  tho, myself included.  ;-)
 
  Changing CFLAGS and rebuilding one package is a lot less work than a
  complete downgrade.
 
  Only if you don't have a backup (which I did before upgrading, but I
  can't be bothered with redoing the upgrade in only a few days, so I'll
  sit it out with Awesome in the meantime). Thankfully, my netbook, which
  takes many an hour to build KDE, runs x86 and isn't affected.
 
 Yep.  I rm'd the kde.keyword file and did an emerge -kuv world and down
 went KDE.  Took about 5 or 10 minutes.  I cooked and ate supper while
 the drive light blinked.  
 
 Back to normal now. 
 
 Thanks again.
 
 Dale
 
 :-)  :-) 
 
 -- 
 I am only responsible for what I said ... Not for what you understood
 or how you interpreted my words!

Of course I have backups, although a quick run of demerge got it back for me. 
But rolling back KDE, with the attendant hassles of messed-up configs, only 
sidesteps the bug, which relates to qt-core, not KDE.

At least it prompted me to, belatedly, set up snapshots on my new ZFS setup, 
so some good came of it.
-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.

Re: [gentoo-user] neon/davfs2/cadaver failing with openssl-1.0.1c

2013-02-09 Thread Arnaud Desgranges
Have you tried revdep-rebuild -L libssl.so.1.0.0 ?
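
For example, a quick sketch (the library name to pass is whatever the broken
consumers still link against; anything after the double dash goes straight to
emerge):

# revdep-rebuild --library libssl.so.1.0.0 --pretend
# revdep-rebuild --library libssl.so.1.0.0 -- --ask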

- Original Message -
From: William Kenworthy bi...@iinet.net.au
To: gentoo-user@lists.gentoo.org
Sent: Saturday, 9 February 2013 01:33:42
Subject: [gentoo-user] neon/davfs2/cadaver failing with openssl-1.0.1c

I have been using dev-libs/openssl-0.9.8x with neon/davfs2 to mount a
share from Blackboard, but after upgrading to 1.0.1c it no longer works.
I've been rebuilding and cleaning up the libs but no luck so far.  The
problem is limited (as far as I can see) to neon and the automatic fallback
to TLSv1.0 failing (the server will only do that version).

openssl-1.0.1c also installs libraries numbered like
/usr/lib/libssl.so.1.0.0 - it's probably not related, though one of the two
failing systems did have a 1.0.0 version installed at one stage.

Has this been seen before?

BillK




Re: [gentoo-user] Re: multiple installs

2013-02-09 Thread Adam Carter
 You could install to one of the Athlons, then copy it to the other two
 Athlons and one of the FX machines.  Then, reconfigure the FX install
 for more CPUs and native -march, recompile world and copy it to the
 other two FX machines.


Even that might not be worth it. Unless you're using something that
benefits from the new instructions in the FXes, or requires the last 0.1%
of performance, a single install on an Athlon copied onto the remaining 5
systems will give you more time for something else. From a support point of
view the consistency can be a win. Just remember to specify the Athlon64
CPU type in the kernel config, but with 8 cores max.

For the hardware, just select a superset of the two systems' requirements in
your kernel config. If you build drivers as modules, any superfluous code
won't even be loaded.
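
To make that concrete, a sketch of the relevant kernel .config bits (the
driver symbols at the end are just placeholders for whatever NICs/chipsets
the boards actually have):

# CPU type every box can run (Athlon64/K8 baseline), capped at 8 cores
CONFIG_MK8=y
CONFIG_NR_CPUS=8
# hardware support for both boards built as modules; only what is present
# actually gets loaded, e.g.:
CONFIG_E1000E=m
CONFIG_R8169=m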


Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-09 Thread Adam Carter
 There are several things you can do to improve the state of things.
 The first and foremost is to add caching in front of the server, using
 an accelerator proxy. (i.e. squid running in accelerator mode.) In
 this way, you have a program which receives the user's request, checks
 to see if it's a request that it already has a response for, checks
 whether that response is still valid, and then checks to see whether
 or not it's permitted to respond on the server's behalf...almost
 entirely without bothering the main web server. This process is far,
 far, far faster than having the request hit the serving application's
 main code.



I was under the impression that Apache is coded sensibly enough to handle
incoming requests at least as well as Squid would. Agree with everything
else tho.

OP should look into what's required on the back end to process those 6
requests, as it superficially appears that a very small number of requests
is generating a huge amount of work, and that means the site would be easy
to DoS.
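
For reference, a minimal sketch of squid in accelerator mode as Michael
describes (this assumes squid 3.x, Apache moved to 127.0.0.1:8080, and
www.example.com standing in for the real site):

http_port 80 accel defaultsite=www.example.com
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=backend
acl our_sites dstdomain www.example.com
http_access allow our_sites
cache_peer_access backend allow our_sites
cache_peer_access backend deny all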


Re: [gentoo-user] neon/davfs2/cadaver failing with openssl-1.0.1c

2013-02-09 Thread William Kenworthy
Many times! (and the list includes libreoffice, so it's painfully long in
build time :(

I had one machine still working with 9.8x, so I upgraded that ... and it
broke too.  OK, so I thought, go back to 9.8 and remove 1.0.1c ... and
found that it's now slotted and is just a stub, so it installs libs but
no headers etc ... useless when you want to downgrade :(

There is an entry in the changelog saying 1.0.1c suffers breakage (but
not saying what), so I went ~x86 and it's rebuilding now, but it still
doesn't work.

Can't believe I am the only one suffering this, unless it's just some
servers like the Blackboard ones :(

BillK


On 09/02/13 17:47, Arnaud Desgranges wrote:
 Have you tried revdep-rebuild -L libssl.so.1.0.0 ?
 
 - Original Message -
 From: William Kenworthy bi...@iinet.net.au
 To: gentoo-user@lists.gentoo.org
 Sent: Saturday, 9 February 2013 01:33:42
 Subject: [gentoo-user] neon/davfs2/cadaver failing with openssl-1.0.1c
 
 I have been using dev-libs/openssl-0.9.8x with neon/davfs2 to mount a
 share from Blackboard, but after upgrading to 1.0.1c it no longer works.
 I've been rebuilding and cleaning up the libs but no luck so far.  The
 problem is limited (as far as I can see) to neon and the automatic fallback
 to TLSv1.0 failing (the server will only do that version).
 
 openssl-1.0.1c also installs libraries numbered like
 /usr/lib/libssl.so.1.0.0 - it's probably not related, though one of the two
 failing systems did have a 1.0.0 version installed at one stage.
 
 Has this been seen before?
 
 BillK
 
 




Re: [gentoo-user] KDE 4.10 and plasma crashing

2013-02-09 Thread Nilesh Govindrajan
On Sat, Feb 9, 2013 at 9:58 AM, Nilesh Govindrajan m...@nileshgr.com wrote:

 Dale, which version of GCC are you using? I'm on 4.7.2, updating KDE
 now, will report back after a few hours when it's done.

 --
 Nilesh Govindarajan
 http://nileshgr.com

Those few hours took almost the whole day, meh! Thanks to chromium.
Anyways, it's working without any hassles for me.

-- 
Nilesh Govindrajan
http://nileshgr.com



[gentoo-user] Getting back new config files

2013-02-09 Thread Nilesh Govindrajan
Hi,

Quite a lot of times there are config updates to be performed using
dispatch-conf.
Suppose I discard a config by mistake while updating one but I know
which package it belongs to.
How do I get the new configuration back? Remerging the package doesn't
seem to solve the problem.

I'm using paludis.

-- 
Nilesh Govindrajan
http://nileshgr.com



Re: [gentoo-user] Getting back new config files

2013-02-09 Thread Neil Bothwick
On Sat, 9 Feb 2013 18:13:18 +0530, Nilesh Govindrajan wrote:

 Quite a lot of times there are config updates to be performed using
 dispatch-conf.
 Suppose I discard a config by mistake while updating one but I know
 which package it belongs to.
 How do I get the new configuration back? Remerging the package doesn't
 seem to solve the problem.
 
 I'm using paludis.

emerge has the --noconfmem option to deal with this. I don't use paludis but
expect it has something similar.
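
Something like this, for example (the package atom is hypothetical):

# emerge --oneshot --noconfmem app-misc/some-package
# dispatch-conf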


-- 
Neil Bothwick

This message has been cruelly tested on sweet little furry animals.




Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-09 Thread Michael Mol

On 02/09/2013 05:36 AM, Adam Carter wrote:
 
 There are several things you can do to improve the state of
 things. The first and foremost is to add caching in front of the
 server, using an accelerator proxy. (i.e. squid running in
 accelerator mode.) In this way, you have a program which receives
 the user's request, checks to see if it's a request that it already
 has a response for, checks whether that response is still valid,
 and then checks to see whether or not it's permitted to respond on
 the server's behalf...almost entirely without bothering the main
 web server. This process is far, far, far faster than having the
 request hit the serving application's main code.
 
 
 
 I was under the impression that Apache is coded sensibly enough to
 handle incoming requests at least as well as Squid would. Agree
 with everything else tho.

Sure, so long as Apache doesn't have any additional modules loaded. If
it's got something like mod_php loaded (extraordinarily common),
mod_perl or mod_python (less common now), then that module's init time
gets added to the init time for every request handler.


 OP should look into what's required on the back end to process
 those 6 requests, as it superficially appears that a very small
 number of requests is generating a huge amount of work, and that
 means the site would be easy to DoS.

Absolutely, hence the steps I outlined to reduce or optimize backend
processing.



Re: [gentoo-user] KDE 4.10 and plasma crashing

2013-02-09 Thread Dale
Neil Bothwick wrote:
 Dale rdalek1...@gmail.com wrote:

 Frank Steinmetzger wrote:

 On 08.02.2013 23:54, Neil Bothwick wrote:

 On Fri, 08 Feb 2013 16:45:13 -0600, Dale wrote:

 Well, switched to a newer gcc, same thing. Going back
 to KDE 4.9 for a bit. They will have it fixed in a
 couple days. After all, Linux has some of the smartest
 programmers there is. I'm not sure of some users tho,
 myself included. ;-) 

 Changing CFLAGS and rebuilding one package is a lot less
 work than a complete downgrade. 

 Only if you don't have a backup (which I did before upgrading,
 but I can't be bothered with redoing the upgrade in only a few
 days, so I'll sit it out with Awesome in the meantime).
 Thankfully, my netbook, which takes many an hour to build
 KDE, runs x86 and isn't affected.


 Yep.  I rm'd the kde.keyword file and did an emerge -kuv world and down
 went KDE.  Took about 5 or 10 minutes.  I cooked and ate supper while
 the drive light blinked.  

 Back to normal now. 

 Thanks again.

 Dale

 :-)  :-) 


 Of course I have backups, although a quick run of demerge got it back
 for me. But rolling back KDE, with the attendant hassles of messed up
 configs, only sidesteps the bug, which relates to qt-core, not KDE.

 At least it prompted me to, belatedly, set up snapshots on my new ZFS
 setup, so some good came of it.
 -- 
 Sent from my Android phone with K-9 Mail. Please excuse my brevity. 

I didn't have any config problems at all.  I did have to update but it's
the same thing I have to do when I do an upgrade.  It may only be a qt
problem but it sure did bork KDE up badly.  KDE was practically dead in
the water.  It reminded me of a ship that got hit by a large torpedo. 
It was not sinking, it was sunk.  Almost nothing in KDE worked.  No kicker,
no wallpaper for the desktop, no KDE menu, no right-clicking on the
desktop.  Also, no way to log out either.  The only way to log out was to
restart xdm.  Yea, no matter what caused it, for ME at least,
downgrading was the easiest and fastest. 

YMMV tho. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!



Re: [gentoo-user] Re: Bug in spidermonkey?

2013-02-09 Thread Elias Diem
Hi Walt

On 2013-02-08,  walt wrote:

 I just built 1.8.5-r1 on my ~amd64 machine without errors, and I have 1.8.5-r4
 already installed.  Can you unmask and compile r4 just as a test?

I tried the r4 version but it failed the same way.

 BTW, in your build log I see waiting for unfinished jobs, so it would be 
 worth
 trying -j1 instead of -j4.  Your stable build tools are no doubt very 
 different
 from the versions I'm running on ~amd64.  Just another random thing to try in
 a puzzling situation...

I also set MAKEOPTS to -j1 and it failed as well.

I guess I'm going to reinstall the whole system.

-- 
Greetings
Elias





[gentoo-user] Re: multiple installs

2013-02-09 Thread walt
On 02/08/2013 11:46 AM, James wrote:
 Hello,
 
 I have quite a few amd64 systems to install, hopefully
 mostly unattended. I'm looking for a way to install quick and simple
 workstations running KDE. All will have boot, root and swap partitions only.
 They can be updated to current, individually, later.
 
 http://en.gentoo-wiki.com/wiki/Install_LiveDVD_11.2_to_hard_disk_drive
 
 So the best guide I have found is this one for the
 livedvd 11.2.  Any modifications for the
 livedvd-amd64-multilib-20121221.iso version?
 
 Grub-2, udev, and some other changes may make following
 this 11.2 guide problematic?
 
 I wanted to do these installs with ZFS 

The only obvious problem I can see is that grub2 will need zfs support
if your /boot is going to be zfs.  I don't recall all of the details,
but at one point during the grub2 install you can tell it to pre-load
the zfs module (and any other modules you may want) during the install.
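
Roughly along these lines (untested sketch; the module list, target disk and
config path depend on your partition layout and how grub2 was built here):

# grub2-install --modules="zfs part_gpt" /dev/sda
# grub2-mkconfig -o /boot/grub2/grub.cfg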





Re: [gentoo-user] Which CPU type to select?

2013-02-09 Thread Bruce Hill
On Fri, Feb 08, 2013 at 08:48:08PM -0500, Walter Dnes wrote:
   I'm installing Gentoo on a brand new toy I just got myself.  Here are
 the choices from make menuconfig.  Is Core 2/newer Xeon correct?
 
  ( ) Opteron/Athlon64/Hammer/K8
  ( ) Intel P4 / older Netburst based Xeon
  (X) Core 2/newer Xeon
  ( ) Intel Atom
  ( ) Generic-x86-64
 
   Here's the listing for one of the cores from /proc/cpuinfo
 
 vendor_id   : GenuineIntel
 cpu family  : 6
 model   : 37
 model name  : Intel(R) Core(TM) i7 CPU   M 620  @ 2.67GHz

You can also use this patch, which adds support for modern (>=gcc-4.6) CPU
options in the kernel:
http://www.linuxforge.net/linux/kernel/kernel-30rc5-gcc46-0.patch -- you can
get Intel Core i7 and Intel Core i7 AVX choices in your kernel config.
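
Roughly (assuming the patch still applies cleanly to the kernel sources
you're running):

# cd /usr/src/linux
# patch -p1 < /path/to/kernel-30rc5-gcc46-0.patch
# make menuconfig
(the extra choices show up under Processor type and features -> Processor family)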

Bruce
-- 
Happy Penguin Computers   ')
126 Fenco Drive   ( \
Tupelo, MS 38801   ^^
supp...@happypenguincomputers.com
662-269-2706 662-205-6424
http://happypenguincomputers.com/

A: Because it messes up the order in which people normally read text.   

   
Q: Why is top-posting such a bad thing? 

   
A: Top-posting. 

   
Q: What is the most annoying thing in e-mail?

Don't top-post: http://en.wikipedia.org/wiki/Top_post#Top-posting



[gentoo-user] SSH UseDNS without IPv6?

2013-02-09 Thread Florian Philipp
Hi list!

I have an issue with SSH. It's a variation of the old Set 'UseDNS no'
to avoid delays with faulty DNS records theme.

Following setup:
1. I have a server with IPv6 compiled into the SSH daemon but no actual
IPv6 network interface.

2. The SSH client has no IPv6, neither compiled nor active.

3. The DNS server doesn't serve or support AAAA records. Apparently it
drops all such requests. All other records for IP and reverse lookup are
correct.

Now I'm experiencing the classic, very long delay when connecting to the
server via SSH because it does DNS lookups. When I look at wireshark
dumps, I see correctly served A and reverse lookups but the server also
insists on doing AAAA requests which time out.

I tried limiting the sshd AddressFamily to inet (aka IPv4) but this
didn't change anything. Is there another workaround or do I really have
to deactivate DNS lookups?

Thanks in advance!
Florian Philipp





Re: [gentoo-user] systemd-197-r1 starts gdm-3.6.2

2013-02-09 Thread Stefan G. Weichinger

Next episode:

I also migrated my gentoo thinkpad to systemd today.

Generally very similar to my desktop ... ~amd64 with Gnome 3.6.

Things went pretty well, I have to say.

I can login to gdm here! ;-)

An issue I haven't solved yet: encrypted swap.

I always get timeouts as systemd waits for the decrypted mapper-device
to come up. Swap doesn't get enabled but when it finally continues to
boot I see the valid mapper-device there and can swapon it manually.

# cat /etc/crypttab

swap /dev/disk/by-id/ata-INTEL_SSDSA2M080G2GC_CVPO015404LR080JGN-part5
/dev/urandom swap,cipher=aes-cbc-essiv:sha256,size=256

# grep swap /etc/fstab
/dev/mapper/swap  none  swap  defaults  0 0

As far as I understand, these two lines should be enough to let systemd
generate its relevant unit files etc.

Right?

Best regards, have a nice weekend, Stefan



Re: [gentoo-user] systemd-197-r1 starts gdm-3.6.2

2013-02-09 Thread Stefan G. Weichinger
On 2013-02-09 19:56, Stefan G. Weichinger wrote:

 As far as I understand, these two lines should be enough to let systemd
 generate its relevant unit files etc.
 
 Right?

Additional thoughts:

Is pam_mount obsolete with systemd?

It is possible to mount my /home via systemd-unit as well ... the
difference seems to be that systemd would (try to) mount it at boot-time
while with pam_mount it would be mounted at login.

Thoughts? Experiences?

Stefan



Re: [gentoo-user] SSH UseDNS without IPv6?

2013-02-09 Thread Alan McKinnon
On 09/02/2013 20:22, Florian Philipp wrote:
 Hi list!
 
 I have an issue with SSH. It's a variation of the old Set 'UseDNS no'
 to avoid delays with faulty DNS records theme.
 
 Following setup:
 1. I have a server with IPv6 compiled into the SSH daemon but no actual
 IPv6 network interface.
 
 2. The SSH client has no IPv6, neither compiled nor active.
 
 3. The DNS server doesn't serve or support AAAA records. Apparently it
 drops all such requests. All other records for IP and reverse lookup are
 correct.
 
 Now I'm experiencing the classic, very long delay when connecting to the
 server via SSH because it does DNS lookups. When I look at wireshark
 dumps, I see correctly served A and reverse lookups but the server also
 insists on doing AAAA requests which time out.

When you say the server also insists on doing AAAA requests, you mean
the SSH server, right?

 
 I tried limiting the sshd AddressFamily to inet (aka IPv4) but this
 didn't change anything. Is there another workaround or do I really have
 to deactivate DNS lookups?

Is the server Gentoo and do you really need IPv6 support on it? Did you
consider rebuilding that host with IPv6 disabled in USE?

IPv6 coexisting with IPv4 is always going to be a tricky problem, and
the recommended defaults you run into all over are usually intended to
force people to hurry IPv6 implementation along :-)

There's always a way to change defaults, and I found this:

http://askubuntu.com/questions/32298/prefer-a-ipv4-dns-lookups-before-ipv6-lookups

The magic file you need to edit appears to be

/etc/gai.conf
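
The usual tweak there is to uncomment (or add) the precedence line so that
IPv4-mapped addresses sort first, something like:

# /etc/gai.conf
precedence ::ffff:0:0/96  100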

-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Creating accounts in Thunderbird

2013-02-09 Thread Alan McKinnon
On 07/02/2013 23:07, Tanstaafl wrote:
 Which is silly, as username+hostname is not guaranteed to be a
 singleton in any universe.
 
 ? I can't think of any way that username+incoming-hostname can result in
 anything other than a single, individual user's account, so I guess I'm
 totally missing what you are saying.

A few examples off the top of my head:


1. Two imap servers on the same host running on different ports and no
reason why a user can't have accounts on both servers
2. port forwarding on localhost to a variety of imap servers somewhere
else (port forwarding gets around corporate firewall rules that
Thunderbird can't deal with)
3. Because I can and there's no legitimate reason for a mail client to
get in my way
4. Corporate sysadmins like me use tricks like this all the time to a)
fix real problems b) comply with frantic business requests c) stay
within budget d) get around stupid rules proclaimed by idiot managers
with single figure IQs

There are more valid reasons why this setup can occur and I have a lack
of mentions in RFCs to prove it.
There are no valid reasons for a mail client to get in my way like this,
and I have a lack of RFC mentions that allow it to prove that.
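
As a concrete illustration of point 2, a sketch (hostnames and ports here are
made up):

# one SSH tunnel, two different IMAP servers behind the corporate firewall
ssh -N -L 1143:imap1.internal:143 -L 2143:imap2.internal:143 user@jumphost

Thunderbird then talks to localhost:1143 and localhost:2143 -- same username,
same hostname, two distinct accounts.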

-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Re: multiple installs

2013-02-09 Thread Alecks Gates
On Sat, 2013-02-09 at 21:20 +1100, Adam Carter wrote:
 You could install to one of the Athlons, then copy it to the other two
 Athlons and one of the FX machines.  Then, reconfigure the FX
 install
 for more CPUs and native -march, recompile world and copy it
 to the
 other two FX machines.
 
 
 Even that might not be worth it. Unless you're using something that
 benefits from the new instructions in the FXes, or requires the
 last .1% performance, a single install on an Athlon and copy onto the
 remaining 5 systems will give you more time for something else. From a
 support point of view the consistency can be a win. Just remember to
 specify the Althlon64 CPU type in the kernel config, but with 8 cores
 max.
 
 
 For the hardware, just select a superset of the two systems
 requirements in your kernel config. If you build as modules any
 superfluous code wont even be loaded. 
 
That might even depend on the compiler version anyway.  But, I agree.
Building with -march=athlon (I think it's athlon?) on one FX machine and
copying to the rest would be fastest.
-- 
Alecks Gates aleck...@gmail.com




Re: [gentoo-user] systemd-197-r1 starts gdm-3.6.2

2013-02-09 Thread Canek Peláez Valdés
On Sat, Feb 9, 2013 at 12:56 PM, Stefan G. Weichinger li...@xunil.at wrote:

 Next episode:

 I also migrated my gentoo thinkpad to systemd today.

Cool.

 Generally very similar to my desktop ... ~amd64 with Gnome 3.6.

 Things went pretty well, I have to say.

 I can login to gdm here! ;-)

Try to list the differences between your laptop and your desktop
(world files, USE flags, partition schemes, etc.). That was my approach
when you described your problem to me: try to see how it
differs from mine, but we never got too far with that.

 An issue I haven't solved yet: encrypted swap.

 I always get timeouts as systemd waits for the decrypted mapper-device
 to come up. Swap doesn't get enabled but when it finally continues to
 boot I see the valid mapper-device there and can swapon it manually.

 # cat /etc/crypttab

 swap /dev/disk/by-id/ata-INTEL_SSDSA2M080G2GC_CVPO015404LR080JGN-part5
 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,size=256

 # grep swap /etc/fstab
 /dev/mapper/swap  none  swap  defaults  0 0

 As far as I understand, these two lines should be enough to let systemd
 generate its relevant unit files etc.

 Right?

I haven't used an encrypted swap (nor an encrypted partition), but I believe
that's all you need. A workaround, perhaps, is to add the nofail option,
which will at least skip the partition when booting.
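
I.e. something like this (untested sketch -- whether the cryptsetup generator
in that systemd version honours nofail is an assumption):

# /etc/crypttab
swap /dev/disk/by-id/ata-INTEL_SSDSA2M080G2GC_CVPO015404LR080JGN-part5 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,size=256,nofail

# /etc/fstab
/dev/mapper/swap  none  swap  defaults,nofail  0 0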

Regards.
-- 
Canek Peláez Valdés
Posgrado en Ciencia e Ingeniería de la Computación
Universidad Nacional Autónoma de México



Re: [gentoo-user] systemd-197-r1 starts gdm-3.6.2

2013-02-09 Thread Canek Peláez Valdés
On Sat, Feb 9, 2013 at 1:44 PM, Stefan G. Weichinger li...@xunil.at wrote:
 On 2013-02-09 19:56, Stefan G. Weichinger wrote:

 As far as I understand, these two lines should be enough to let systemd
 generate its relevant unit files etc.

 Right?

 Additional thoughts:

 Is pam_mount obsolete with systemd?

I don't know if obsolete is the correct definition, but it is not
installed in any of my systems.

 It is possible to mount my /home via systemd-unit as well ... the
 difference seems to be that systemd would (try to) mount it at boot-time
 while with pam_mount it would be mounted at login.

You can mount almost all partitions with systemd units; there was a
discussion some days ago about getting rid of /etc/fstab for the
embedded case and stuff like that. Also, you can set up a .mount unit
for your $HOME and make the gdm service depend on it (it would be
mounted at gdm startup, not at session startup, though).
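
A sketch of what such a unit could look like (the device, filesystem type and
options are assumptions; the unit file name must match the mount point):

# /etc/systemd/system/home.mount
[Unit]
Description=Mount /home

[Mount]
What=/dev/mapper/vg-home
Where=/home
Type=ext4
Options=defaults

[Install]
WantedBy=local-fs.target

To tie gdm to it, add Requires=home.mount and After=home.mount to a local
copy of gdm.service under /etc/systemd/system/.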

 Thoughts? Experiences?

I have never used pam_mount; what's the upside? Just delaying the
mounting (and perhaps fsck'ing) of the partition until session login?

Regards.
-- 
Canek Peláez Valdés
Posgrado en Ciencia e Ingeniería de la Computación
Universidad Nacional Autónoma de México



Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-09 Thread Adam Carter
 Sure, so long as Apache doesn't have any additional modules loaded. If
 it's got something like mod_php loaded (extraordinarily common),
 mod_perl or mod_python (less common, now) then the init time of
 mod_php gets added to the init time for every request handler.


Interesting, so if you have to use mod_php you'd probably be better off
running Worker than Prefork, and you'd want to keep MaxConnectionsPerChild
on the higher side, to reduce the init work you've mentioned, right? It may
also help to verify that KeepAlive is on and tweak MaxKeepAliveRequests a
little higher.


Re: [gentoo-user] Re: multiple installs

2013-02-09 Thread Adam Carter

 That might even depend on the compiler version anyway.  But, I agree.
 Building with -march=athlon (I think it's athlon?) on one FX machine and
 copying to the rest would be fastest.


Good call, I missed that. Of course it makes much more sense to build on a
fast FX box with -j8, but with CFLAGS for the X2, which according to the
safe CFLAGS wiki
(http://en.gentoo-wiki.com/wiki/Safe_Cflags/AMD#Athlon_64_X2) is:

CHOST="x86_64-pc-linux-gnu"
CFLAGS="-march=k8 -O2 -pipe"
CXXFLAGS="${CFLAGS}"


Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-09 Thread Michael Mol
On Feb 9, 2013 9:26 PM, Adam Carter adamcart...@gmail.com wrote:

  Sure, so long as Apache doesn't have any additional modules loaded. If
  it's got something like mod_php loaded (extraordinarily common),
  mod_perl or mod_python (less common, now) then the init time of
  mod_php gets added to the init time for every request handler.


 Interesting, so if you have to use mod_php you'd probably be better off
running Worker than Prefork, and you'd want to keep MaxConnectionsPerChild
on the higher side, to reduce init work you've mentioned, right? May also
help to verify that KeepAlive is on and tweak MaxKeepAliveRequests a little
higher.

Can't; mod_php isn't compatible with mpm_worker. You have to use a
single-threaded mpm like prefork or itk.

Anyway, you're starting to get the idea why you want a caching proxy in
front of apache.
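
For reference, the knobs mentioned in this sub-thread look roughly like this
in a prefork setup (Apache 2.4 directive names -- 2.2 calls them MaxClients
and MaxRequestsPerChild; the numbers are only placeholders to size against
the available RAM):

<IfModule mpm_prefork_module>
    StartServers            10
    MinSpareServers         10
    MaxSpareServers         20
    MaxRequestWorkers      150
    MaxConnectionsPerChild 10000
</IfModule>

KeepAlive            On
MaxKeepAliveRequests 500
KeepAliveTimeout     5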


Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-09 Thread Adam Carter

 Can't; mod_php isn't compatible with mpm_worker. You have to use a
 single-threaded mpm like prefork or itk.

 Anyway, you're starting to get the idea why you want a caching proxy in
 front of apache.

Indeed. Thanks for your comments.


Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-09 Thread Grant
 The responses all come back successfully within a few seconds.
 Can you give me a really general description of the sort of problem
 that could behave like this?

 Your server is just a single computer, running multiple processes.
 Each request from a user (be it you or someone else) requires a
 certain amount of resources while it's executing. If there aren't
 enough resources, some of the requests will have to wait until enough
 others have finished in order for the resources to be freed up.

Here's where I'm confused.  The requests are made via a browser and
the response is displayed in the browser.  There is no additional
processing besides the display of the response.  The responses are
received and displayed within about 3 seconds of when the requests are
made.  Shouldn't this mean that all processing related to these
transactions is completed within 3 seconds?  If so, I don't understand
why apache2 seems to bog down a bit for about 10 minutes afterward.

- Grant


 To really simplify things, let's say your server has a single CPU
 core, the queries made against it only require CPU consumption, not
 disk consumption, and the queries you're making require 3s of CPU time.

 If you make a query, the server will spend 3s thinking before it spits
 a result back to you. During this time, it can't think about anything
 else...if it does, the server will take as much longer to respond to
 you as it takes thinking about other things.

 Let's say you make two queries at the same time. Each requires 3s of
 CPU time, so you'll need a grand total of 6s to get all your results
 back. That's fine, you're expecting this.

 Now let's say you make a query, and someone else makes a query. Each
 query takes 3s of CPU time. Since the server has 6s worth of work to
 do, all the users will get their responses by the end of that 6s.
 Depending on how a variety of factors come into play, user A might see
 his query come back at the end of 3s, and user B might see his query
 come back at the end of 6s. Or it might be reversed. Or both users
 might not see their results until the end of that 6s. It's really not
 very predictable.

 The more queries you make, the more work you give the server. If the
 server has to spend a few seconds' worth of resources, that's a few
 seconds' worth of resources unavailable to other users. A few seconds
 for a query against a web server is actually a huge amount of time...a
 well-tuned application on a well-tuned webserver backed by a
 well-tuned database should probably respond to the query in under
 50ms! This is because there are often many, many users making queries,
 and each user tends to make many queries at the same time.

 There are several things you can do to improve the state of things.
 The first and foremost is to add caching in front of the server, using
 an accelerator proxy. (i.e. squid running in accelerator mode.) In
 this way, you have a program which receives the user's request, checks
 to see if it's a request that it already has a response for, checks
 whether that response is still valid, and then checks to see whether
 or not it's permitted to respond on the server's behalf...almost
 entirely without bothering the main web server. This process is far,
 far, far faster than having the request hit the serving application's
 main code.

 The second thing is to check the web server configuration itself. Does
 it have enough spare request handlers available? Does it have too
 many? If there's enough CPU and RAM left over to launch a few more
 request handlers when the server is under heavy load, it might be a
 good idea to allow it to do just that.

 The third thing to do is to tune the database itself. MySQL in
 particular ships with horrible default settings that typically limit
 its performance to far below the hardware you'd normally find it on.
 Tuning the database requires knowledge of how the database engine
 works. There's an entire profession dedicated to doing that right...

 The fourth thing to do is add caching to the application, using things
 like memcachedb. This may require modifying the application...though
 if the application has support already, then, well, great.

 If that's still not enough, there are more things you can do, but you
 should probably start considering throwing more hardware at the problem...
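
On the database-tuning point in that list, a starting-point my.cnf sketch
(all values are illustrative assumptions and need sizing against the actual
host and MySQL version):

[mysqld]
innodb_buffer_pool_size = 1G     # usually the biggest single win for InnoDB
innodb_log_file_size    = 128M
query_cache_type        = 1
query_cache_size        = 32M
max_connections         = 100
tmp_table_size          = 64M
max_heap_table_size     = 64M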



[gentoo-user] What to do with /var/run?

2013-02-09 Thread Grant
I received the following ELOG message after an emerge:

 * One or more symlinks to directories have been preserved in order to
 * ensure that files installed via these symlinks remain accessible. This
 * indicates that the mentioned symlink(s) may be obsolete remnants of an
 * old install, and it may be appropriate to replace a given symlink with
 * the directory that it points to.
 *
 * /var/run

Should I change anything?

- Grant



[gentoo-user] Shorewall: iptables: No chain/target/match by that name.

2013-02-09 Thread Grant
I'm getting the following when restarting shorewall:

# /etc/init.d/shorewall restart
 * Stopping firewall ...
 * Starting firewall ...
iptables: No chain/target/match by that name.

How can I find out which chain/target/match I need to compile into the
kernel?  shorewall-init.log does not indicate any problems and I have
LOG_VERBOSITY=2 in shorewall.conf which is the maximum.

- Grant



Re: [gentoo-user] KDE 4.10 and plasma crashing

2013-02-09 Thread Dale
Nilesh Govindrajan wrote:
 Dale, which version of GCC are you using? I'm on 4.7.2, updating KDE
 now, will report back after a few hours when it's done. -- Nilesh
 Govindarajan http://nileshgr.com 

At first, gcc-4.5.4.  The second time, gcc-4.6.3.  According to the
roach report, they have a fix for this.  Just waiting for it to travel
from the top to the bottom. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] What to do with /var/run?

2013-02-09 Thread J. Roeleveld
Grant emailgr...@gmail.com wrote:

I received the following ELOG message after an emerge:

 * One or more symlinks to directories have been preserved in order to
 * ensure that files installed via these symlinks remain accessible. This
 * indicates that the mentioned symlink(s) may be obsolete remnants of an
 * old install, and it may be appropriate to replace a given symlink with
 * the directory that it points to.
 *
 * /var/run

Should I change anything?

- Grant

If you do shorewall safe-restart (not using the init-script) you see logging 
on the screen.

--
Joost Roeleveld
-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.



Re: [gentoo-user] Shorewall: iptables: No chain/target/match by that name.

2013-02-09 Thread Dan Johansson
On 02/10/13 06:19, Grant wrote:
 I'm getting the following when restarting shorewall:
 
 # /etc/init.d/shorewall restart
  * Stopping firewall ...
  * Starting firewall ...
 iptables: No chain/target/match by that name.
 
 How can I find out which chain/target/match I need to compile into the
 kernel?  shorewall-init.log does not indicate any problems and I have
 LOG_VERBOSITY=2 in shorewall.conf which is the maximum.

I had the same problem. Using 'shorewall trace restart' I could figure
out which chain/target/match was missing.
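
E.g. (the grep pattern and the kernel symbols below are only examples of what
to look for):

# shorewall trace restart 2>&1 | tee /tmp/shorewall-trace.txt
# grep -B2 'No chain/target/match' /tmp/shorewall-trace.txt

The iptables command just above the error names the missing match or target
(say -m state or -j LOG), which maps to a kernel option such as
CONFIG_NETFILTER_XT_MATCH_STATE or CONFIG_NETFILTER_XT_TARGET_LOG.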

Regards.
-- 
Dan Johansson, http://www.dmj.nu
***
This message is printed on 100% recycled electrons!
***




Re: [gentoo-user] What to do with /var/run?

2013-02-09 Thread J. Roeleveld
J. Roeleveld jo...@antarean.org wrote:

Grant emailgr...@gmail.com wrote:

I received the following ELOG message after an emerge:

 * One or more symlinks to directories have been preserved in order to
 * ensure that files installed via these symlinks remain accessible. This
 * indicates that the mentioned symlink(s) may be obsolete remnants of an
 * old install, and it may be appropriate to replace a given symlink with
 * the directory that it points to.
 *
 * /var/run

Should I change anything?

- Grant

If you do shorewall safe-restart (not using the init-script) you see
logging on the screen.

--
Joost Roeleveld

Oops...
Replied on wrong thread. This was meant for the shorewall one.

I blame the mail client on the phone...
-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.