Re: [CentOS-docs] Images for CentOS Documentation

2011-03-07 Thread Ralph Angenendt
On 04.03.11 17:06, Andreas Rogge wrote:
 I'm currently porting the public and free parts of Red Hat Documentation 
 to CentOS.
 Being unable to do anything graphics-related, I need someone to provide 
 the following images:
 
 a) logo.svg        300x140  CentOS Logo
 b) image_left.png  124x39   CentOS Logo
 c) image_right.png 120x41   CentOS Documentation Logo (to be designed)

a) and b) shouldn't be a problem, I can do those tomorrow. Regarding c) -
for what is that needed? How does that look within RHEL? I can probably do
one too, but I need to know what it stands for :)

Sorry for the late reply, wasn't really there over the weekend.

Regards and thanks,

Ralph
___
CentOS-docs mailing list
CentOS-docs@centos.org
http://lists.centos.org/mailman/listinfo/centos-docs


[CentOS] connection speeds between nodes

2011-03-07 Thread wessel van der aart
Hi All,

I've been asked to set up a 3D render farm at our office. At the start it
will contain about 8 nodes, but it should be built for growth. The
setup I had in mind is as follows:
All the data is already stored on a StorNext SAN filesystem (Quantum).
This should be mounted on a CentOS server through fibre optics, which
in its turn shares the FS over NFS to all the render nodes (also CentOS).

Now we've estimated that the average file sent to each node will be
about 90MB, so that's roughly the rate I'd like each connection to
sustain. I know that gigabit ethernet should be able to do that (testing
with iperf confirms it), but testing the speed of already existing NFS
shares gives me 55MB/s max. As I'm not familiar with network share
performance tweaking, I was wondering if anybody here is and could give
me some info on this?
I also thought of giving all the nodes 2x 1Gb ethernet ports and putting
those in a bond. Will this do any good, or do I have to take a look at
the NFS server side first?

thanks,

Wessel
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] OT: grep regex pointer appreciated

2011-03-07 Thread Robert Grasso
Hello,

In my opinion, grep is not powerful enough to achieve what you want.
It would be preferable to use at least some of the old but powerful
tools such as sed or awk, or even better: perl. Actually, what you need
is a tool providing a capture buffer (this is perl jargon; back
references in sed jargon) in which you can get the string you want to
extract, rather than trying to build up a positive matching regex, as
the string boundaries seem to be easy enough to describe with regexes.
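For what it's worth, a minimal sketch of that capture-buffer idea, using sed back references on filenames like the ones in the original post (GNU sed's -E flag assumed; the version pattern is my guess at the boundaries):

```shell
# Strip the alphabetic package name up to the '-' that precedes the
# first digit, capture digits-and-dots (optionally followed by
# +digits-and-dots), and print only the captured group (\1).
printf '%s\n' foo-bar-1.2.3+1.2.3.tar.gz baz-4.5.6.i686.tgz baz-4.5.6.x86_64.tgz |
  sed -E 's/^[A-Za-z_-]+-([0-9]+(\.[0-9]+)*(\+[0-9]+(\.[0-9]+)*)?).*$/\1/'
# prints:
#   1.2.3+1.2.3
#   4.5.6
#   4.5.6
```

The same capture works in perl with m/.../ and $1, which is probably easier to extend if the filename scheme grows.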

Regards

---
Robert GRASSO – System engineer

CEDRAT S.A.
15 Chemin de Malacher - Inovallée - 38246 MEYLAN cedex - FRANCE 
Phone: +33 (0)4 76 90 50 45 - Fax: +33 (0)4 56 38 08 30
mailto:robert.gra...@cedrat.com - http://www.cedrat.com  

 -----Original Message-----
 From: centos-boun...@centos.org
 [mailto:centos-boun...@centos.org] On behalf of Patrick Lists
 Sent: 5 March 2011 23:14
 To: CentOS mailing list
 Subject: [CentOS] OT: grep regex pointer appreciated
 
 Hi,
 
 My grep regex foo is not very good and googling is getting me 
 nowhere so 
 hopefully someone is kind enough to give me some pointers.
 
 Goal: grep (non .dbg) filenames and versions from a ftp dir 
 listing and 
 a raw html file:
 
 $ wget --no-remove-listing -O ftp-index.txt ftp://127.0.0.1/test/
 $ wget --no-remove-listing -O index.html http://127.0.0.1/test/
 
 The relevant parts of the files above (first one is ftp 
 listing, second 
 part is the html file, both copied to test_regex.txt) are:
 
 2011 Jan 28 21:25  File  <a
 href="ftp://127.0.0.1/bar-4.5.6.i686.dbg.tgz">bar-4.5.6.i686.dbg.tgz</a>
   (5551274 bytes)
 2011 Jan 28 21:25  File  <a
 href="ftp://127.0.0.1/bar-4.5.6.i686.tgz">bar-4.5.6.i686.tgz</a>
 (5551274 bytes)
 2011 Jan 28 21:25  File  <a
 href="ftp://127.0.0.1/bar-4.5.6.x86_64.dbg.tgz">bar-4.5.6.x86_64.dbg.tgz</a>
   (5551274 bytes)
 2011 Jan 28 21:25  File  <a
 href="ftp://127.0.0.1/bar-4.5.6.x86_64.tgz">bar-4.5.6.x86_64.tgz</a>
 (5551274 bytes)
 
 <tr><td><a
 href="foo-bar-1.2.3+1.2.3.tar.gz">foo-bar-1.2.3+1.2.3.tar.gz</a></td></tr>
 
 This is what I now have (improvements most welcome):
 
 $ egrep -o 
 "([A-Za-z_-]+)([[:digit:]]{1,3}(\.[[:digit:]]{1,3})*).+(.|t)gz" 
 ./test_regex.txt | grep -v .dbg | tr -d ''
 
 Output:
 
 foo-bar-1.2.3+1.2.3.tar.gz
 baz-4.5.6.i686.tgz
 baz-4.5.6.x86_64.tgz
 
 So far so good but now I also want to get the version numbers which I 
 can't figure out. Anyone have a pointer how to get the version number 
 from these filenames (1.2.3+1.2.3 and 4.5.6)?
 
 Thanks!
 Patrick
 



Re: [CentOS] BUG: soft lockup CPU stuck for 10seconds (Server went down)

2011-03-07 Thread Alexander Georgiev
2011/3/7 Alexander Dalloz ad+li...@uni-x.org:
 On 07.03.2011 08:46, Frank Cox wrote:

 Roland's screencopy shows a java process rather than openswan.



Indeed, could it be the http://www.iss.net/threats/414.html DoS? I would
not expect this to be happening in the kernel, though...


Re: [CentOS] Load balancing...

2011-03-07 Thread Nico Kadel-Garcia
On Mon, Mar 7, 2011 at 1:36 AM, David Brian Chait dch...@invenda.com wrote:

 On Mon, Mar 7, 2011 at 4:40 AM, Tim Dunphy bluethu...@gmail.com wrote:
 however for my purpose open and free HAProxy remains best choice!!

 +1 for HAProxy; excellent piece of software.

 It really depends on your needs, if you are building a production ops 
 environment then the last thing that you would want would be an 
 unsupported/home grown solution. You need to consider the potential risks 
 involved in implementing a poorly understood / virtually unsupported solution 
 that in all likelihood only you would understand vs. a standard solution with 
 an SLA behind it and an upgrade path going forward.

Or in implementing an expensive, single-point-of-failure third party
device that requires a centralized control infrastructure. It can turn
out to be a *very* expensive single point of failure, easily screwed
up by a single upgrade, a single power supply issue, or a failure to
set up failover networking to that device properly.

Round-robin DNS is also, unfortunately, often mishandled. People
mistake changing the ordering of listed A records for round-robin and,
to quote Wikipedia:

    "There is no standard procedure for deciding which address will
be used by the requesting application."

No such procedure. Zip, zero, nada; it's all client dependent. And if
one of the IPs is on the same VLAN as the requesting host, you're
*especially* likely to get all the traffic locked to that host. And
when you disable an IP, DNS caches can take rather unpredictable
amounts of time to expire, because every smart aleck downstream is
doing their own caching and passing it along.


Re: [CentOS] Centos 6 - What are you looking forward to?

2011-03-07 Thread John Hodrien
On Sat, 5 Mar 2011, Nico Kadel-Garcia wrote:

 On Fri, Mar 4, 2011 at 7:57 AM, John Hodrien j.h.hodr...@leeds.ac.uk wrote:
 On Fri, 4 Mar 2011, Nico Kadel-Garcia wrote:

 Contemporary versions of git, subversion, and OpenSSH built-in. I'm
 particularly looking forward to the built-in chroot capabilities and
 GSSAPI support in OpenSSH, and the major release improvements to git
 and subversion.

 What does the new GSSAPI support do for you?

 Single sign-on. Your Windows clients, in the right environment, can
 have their Kerberos tickets managed to allow Kerberos tickets, not
 authorized_keys, to be used very effectively and reduce typing
 !@#$!@#$ passwords or manipulating SSH keys. The development version
 of Putty also has this built right in, though it's not made it to the
 production version yet.

But that works just fine with CentOS 5.  I use GSSAPI together with Kerberos
tickets plucked out of Active Directory.  Enable GSSAPIDelegateCredentials and
it'll throw your ticket to the remote side, so you can merrily use your
Kerberos ticket there too.

jh


Re: [CentOS] CentOS and Marvell SAS/SATA drivers

2011-03-07 Thread Nico Kadel-Garcia
On Sun, Mar 6, 2011 at 10:07 PM, Charles Polisher cpol...@surewest.net wrote:

 https://secure.wikimedia.org/wikipedia/en/wiki/Fakeraid#Firmware.2Fdriver-based_RAID
 covers fake RAID.

Ouch. That was *precisely* why I used the 2410, not the 1420, SATA
card, some years back. It was nominally more expensive but well worth
the reliability and support, which was very good for RHEL and CentOS.

I hadn't been thinking about that HostRaid messiness because I read
the reviews and avoided it early.


Re: [CentOS] Centos 6 - What are you looking forward to?

2011-03-07 Thread Nico Kadel-Garcia
On Mon, Mar 7, 2011 at 6:53 AM, John Hodrien j.h.hodr...@leeds.ac.uk wrote:
 On Sat, 5 Mar 2011, Nico Kadel-Garcia wrote:

 On Fri, Mar 4, 2011 at 7:57 AM, John Hodrien j.h.hodr...@leeds.ac.uk wrote:
 On Fri, 4 Mar 2011, Nico Kadel-Garcia wrote:

 Contemporary versions of git, subversion, and OpenSSH built-in. I'm
 particularly looking forward to the built-in chroot capabilities and
 GSSAPI support in OpenSSH, and the major release improvements to git
 and subversion.

 What does the new GSSAPI support do for you?

 Single sign-on. Your Windows clients, in the right environment, can
 have their Kerberos tickets managed to allow Kerberos tickets, not
 authorized_keys, to be used very effectively and reduce typing
 !@#$!@#$ passwords or manipulating SSH keys. The development version
 of Putty also has this built right in, though it's not made it to the
 production version yet.

 But that works just nicely with CentOS 5.  I use GSSAPI together with kerberos
 tickets plucked out of Active Directory.  Enable GSSAPIDelegateCredentials and
 it'll throw your ticket to the remote side, so you can merrily use your
 kerberos ticket there too.

Have you backported OpenSSH 5.x to CentOS 5? Because I don't see the
full feature set without OpenSSH 5.x, such as GSSAPIKeyExchange.

Hmm. What you've described is an ssh_config option, which is set to
"no" by default.  I'll have to look into that. There have been some
interesting traction issues with using the backported OpenSSH 5.x
I'm currently reliant on for CentOS 5 and RHEL 5.


Re: [CentOS] connection speeds between nodes

2011-03-07 Thread Rafa Griman
Hi :)

On Mon, Mar 7, 2011 at 12:12 PM, wessel van der aart
wes...@postoffice.nl wrote:
 Hi All,

 I've been asked to setup a 3d renderfarm at our office , at the start it
 will contain about 8 nodes but it should be build at growth. now the
 setup i had in mind is as following:
 All the data is already stored on a StorNext SAN filesystem (quantum )
 this should be mounted on a centos server trough fiber optics  , which
 in its turn shares the FS over NFS to all the rendernodes (also centos).


From what I can read, you have 1 NFS server only and a separate
StorNext MDC. Is this correct?


 Now we've estimated that the average file send to each node will be
 about 90MB , so that's what i like the average connection to be, i know
 that gigabit ethernet should be able to that (testing with iperf
 confirms that) but testing the speed to already existing nfs shares
 gives me a 55MB max. as i'm not familiar with network shares performance
 tweaking is was wondering if anybody here did and could give me some
 info on this?
 Also i thought on giving all the nodes 2x1Gb-eth ports and putting those
 in a BOND, will do this any good or do i have to take a look a the nfs
 server side first?


Things to check would be:
 - Hardware:
  * RAM and cores on the NFS server
  * # of GigE & FC ports
  * PCI technology you're using: PCIe, PCI-X, ...
  * PCI lanes & bandwidth you're using up
  * whether you are sharing PCI buses between different PCI boards
(FC and GigE): you should NEVER do this. If you have to share a PCI
bus, share it between two PCI devices of the same kind. That is, you
can share a PCI bus between 2 GigE cards or between 2 FC cards, but
never mix the devices.
  * cabling
  * switch configuration
  * RAID configuration
  * cache configuration on the RAID controller. Cache
mirroring gives you more protection, but less performance.

 - Software:
  * check the NFS config. There are some interesting tips if
you google around.
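On the NFS config point, a hedged sketch of the first knobs people usually try (the values are illustrative starting points, not a tested recipe for this particular farm):

```shell
# On each render node: mount with larger transfer sizes, over TCP
# (CentOS 5 client defaults can be smaller than the server supports):
#   mount -t nfs server:/export /mnt/render -o rsize=32768,wsize=32768,tcp
#
# On the NFS server: raise the nfsd thread count from the default 8,
# e.g. in /etc/sysconfig/nfs:
#   RPCNFSDCOUNT=64
# and restart:  service nfs restart
#
# While testing from a client, watch retransmissions with:
#   nfsstat -c
```

None of this substitutes for checking the hardware items, but it is cheap to try.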

HTH

   Rafa


Re: [CentOS] Centos 6 - What are you looking forward to?

2011-03-07 Thread John Hodrien
On Mon, 7 Mar 2011, Nico Kadel-Garcia wrote:

 Have you backported OpenSSH 5.x to CentOS 5? Because I don't see the
 full features set without OpenSSH 5.x, such as GSSApiKeyExchange.

Nope, I like the simple life.

 Hmm. What you've described is an ssh_config option, which is set to
 no by default.  I'll have to look into that. There have been some
 interesting. traction issues with using the backported OpenSSH 5.x
 I'm currently reliant on for CentOS 5 and RHEL 5.

I'm stock 5.5:

openssh-server-4.3p2-41.el5_5.1
openssh-4.3p2-41.el5_5.1
openssh-clients-4.3p2-41.el5_5.1

Server needs:

GSSAPIAuthentication yes
GSSAPICleanupCredentials yes

Most probably you also want:

AllowGroups blah

Client needs:

GSSAPIAuthentication yes

If you want key forwarding, you also need:

GSSAPIDelegateCredentials yes

Works like a charm, and GSSAPI auth works with PuTTY; delegation doesn't seem
to.

jh


Re: [CentOS] Octet

2011-03-07 Thread Kevin Thorpe
On 06/03/2011 13:44, Always Learning wrote:
 I also saw Honeywell upgrading a L66 machine so it would run faster. The
 engineer pulled-out a PCB and took it away. That 'upgrade' cost over 1
 million NLG (Dutch guilders).
Very annoying, those big iron companies. We had two banks of ICL Eagle
drives (10GB in five full-height, filing-cabinet-sized boxes). We
upgraded to Albatrosses (20GB) for a mill or so (I don't know the actual
price). All the engineer did was swap a couple of jumpers and tell us to
reformat in M2FM instead of MFM. Definitely worth the money.

The other one, much later, was a 3 x 1GB upgrade for a GA mini. £3k we
were quoted, when we could buy the drives for about £250 each. The
supplier said 'fine, but we're still charging £3k for the authorisation
code'.

Now the nearest we come to specialised hardware is Dell servers, so we
can't be held hostage.


Re: [CentOS] Migrating standalone systems to xen guests

2011-03-07 Thread Kevin Thorpe

 On Fri, Mar 4, 2011 at 4:57 AM, Simon Matter simon.mat...@invoca.ch
 wrote:
 On Fri, Mar 04, 2011 at 10:31:18AM +0200, Jussi Hirvi wrote:
 Is there any (easy?) way to migrate running standalone CentOS 4 or 5
 systems to xen virtual stacks?
I played with VMware ages ago, and it was the only solution at the time
which could boot an existing installed drive. My home installation was
in a drive caddy and I could stick it in the server at the office and
boot it in VMware quite happily. Magic!


Re: [CentOS] Centos 6 - What are you looking forward to?

2011-03-07 Thread Nico Kadel-Garcia
On Mon, Mar 7, 2011 at 7:14 AM, John Hodrien j.h.hodr...@leeds.ac.uk wrote:
 On Mon, 7 Mar 2011, Nico Kadel-Garcia wrote:

 Have you backported OpenSSH 5.x to CentOS 5? Because I don't see the
 full features set without OpenSSH 5.x, such as GSSApiKeyExchange.

 Nope, I like the simple life.

 Hmm. What you've described is an ssh_config option, which is set to
 no by default.  I'll have to look into that. There have been some
 interesting. traction issues with using the backported OpenSSH 5.x
 I'm currently reliant on for CentOS 5 and RHEL 5.

 I'm stock 5.5:

 openssh-server-4.3p2-41.el5_5.1
 openssh-4.3p2-41.el5_5.1
 openssh-clients-4.3p2-41.el5_5.1

 Server needs:

 GSSAPIAuthentication yes
 GSSAPICleanupCredentials yes

 Most probably you also want:

 AllowGroups blah

 Client needs:

 GSSAPIAuthentication yes

 If you want key forwarding, you also need:

 GSSAPIDelegateCredentials yes

 Works like a charm, and GSSAPI auth works with putty, delegation doesn't seem
 to.

If this works, you've just solved a *BIG* problem for me: I'd been
handed (ordered before I arrived on site) the issue of getting
Centrify OpenSSH to play nicely, and this avoids the "OpenSSH 5.x does
not read .bashrc and read user aliases for remote ssh commands"
problem I've been facing, while preserving the effective GSSAPI
credentials handling.

*Good* admin. And are you coming to the Boston area, so I can buy you a
decent local beer? (I'm not in London anymore.)  Why aren't you over
on comp.security.ssh?


Re: [CentOS] Centos 6 - What are you looking forward to?

2011-03-07 Thread John Hodrien
On Mon, 7 Mar 2011, Nico Kadel-Garcia wrote:

 If this works, you've just solved a *BIG* problem for me: I'd been
 handed (ordered before I arrived on the site) the issues of getting
 Centrify OpenSSH to play nicely, and this avoids the OpenSSH 5.x does
 not read .bashrc and read user aliases for remote ssh commands
 problem I've been facing, while preserving the effective GSSAPI
 credentials handling.

Tested this with regular MIT kerberos under CentOS some time ago, but am
actually running it against Active Directory currently.

 *Good* admin. And are you coming to the Boston are, so I can buy you a
 decent local beer? (I'm not in London anymore.)  Why aren't you over
 on the comp.security.ssh?

Too many groups, too little time.  Tell you what, solve all the niggly little
problems I've had with kerberised NFSv4 with CentOS5, and we'll call it quits.

jh


Re: [CentOS] Octet

2011-03-07 Thread Simon Matter
 On 06/03/2011 13:44, Always Learning wrote:
 I also saw Honeywell upgrading a L66 machine so it would run faster. The
 engineer pulled-out a PCB and took it away. That 'upgrade' cost over 1
 million NLG (Dutch guilders).
 Very annoying those big iron companies. We had two banks of ICL Eagle
 drives (10GB in five full height filing cabinet sized boxes). We
 upgraded to Albatrosses (20GB) for a mill or so (don't know the actual
 price). All the engineer did was swap a couple of jumpers and told us to
 reformat in M2FM instead of MFM. Definitely worth the money.

In the case of NC machines it was quite common that the amount of
usable memory was just a configuration setting. After you paid a horrible
amount of money, a service engineer came, entered a special code,
reconfigured the amount of memory, and that was it. With a modem
connection it could even be done remotely :)

Simon



Re: [CentOS] Centos 6 - What are you looking forward to?

2011-03-07 Thread Nico Kadel-Garcia
On Mon, Mar 7, 2011 at 7:56 AM, John Hodrien j.h.hodr...@leeds.ac.uk wrote:
 On Mon, 7 Mar 2011, Nico Kadel-Garcia wrote:

 If this works, you've just solved a *BIG* problem for me: I'd been
 handed (ordered before I arrived on the site) the issues of getting
 Centrify OpenSSH to play nicely, and this avoids the OpenSSH 5.x does
 not read .bashrc and read user aliases for remote ssh commands
 problem I've been facing, while preserving the effective GSSAPI
 credentials handling.

 Tested this with regular MIT kerberos under CentOS some time ago, but am
 actually running it against Active Directory currently.

 *Good* admin. And are you coming to the Boston are, so I can buy you a
 decent local beer? (I'm not in London anymore.)  Why aren't you over
 on the comp.security.ssh?

 Too many groups, too little time.  Tell you what, solve all the niggly little
 problems I've had with kerberised NFSv4 with CentOS5, and we'll call it quits.

Ahh, I'll just trade you this fine lease on swampland in Florida for
your first-born, shall I?

NFSv4 is *NOT* your friend, and Kerberizing it effectively is not
trivial. I'm using Centrify for that, and to have a reliable upstream
vendor who can actually support it. (I'm on a contract.) What's the
issue you're encountering, besides the lack of nfs4-acl-editor in
the RPMs?

nfs4-acl-editor is actually in the nfs4 tools source tree,
it's just not compiled. It's not a perfect tool, but I think it's well
worth getting into the extras repository for CentOS.


Re: [CentOS] OT: grep regex pointer appreciated

2011-03-07 Thread Patrick Lists
On 03/07/2011 12:23 PM, Robert Grasso wrote:
 Hello,

 On my opinion, grep is not powerful enough in order to achieve what you want. 
 It would be preferable to use at least some (old but
 powerful) tools such sed, awk, or even better : perl. Actually, what you need 
 is a tool providing a capture buffer (this is perl
 jargon - back references in sed jargon) in which you can get the string you 
 want to extract, rather than trying to build up a
 positive matching regex, as the string boundaries seem to be easy enough to 
 describe with regexs.

Thank you for your advice. After much fiddling I came up with something 
that seems to work. I have never dabbled with perl but will dig up my 
sed/awk book and see if there's a more elegant way to do this.

Regards,
Patrick


Re: [CentOS] Centos 6 - What are you looking forward to?

2011-03-07 Thread John Hodrien
On Mon, 7 Mar 2011, Nico Kadel-Garcia wrote:

 NFSv4 is *NOT* your friend, and Kerberizing it effectively is not
 trivial. I'm using Centrify for that and to have a reliable upstream
 vendor who can actually support it. (I'm on a contract.) What's the
 issue you're encountering, besides the lack of nfs4-acl-editor in
 the RPM's.

With a CentOS 5 server and a CentOS 5 client, I've yet to manage to get it to
play nicely for long periods without deciding that I'm evil.  Sometimes it
works fine, then a reboot or a minor tinker that I'm sure shouldn't affect
anything will leave it refusing to mount with "Operation not permitted".  Or
it'll let me mount it as root, but as soon as I use it with a Kerberos ticket
it will have a big long pause before deciding it doesn't like me.  The client
works fine against an EMC box, and I'd had the server working before I started
using Active Directory.
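For reference, the moving parts of a kerberised NFSv4 setup on CentOS 5 look roughly like the sketch below; the export path, realm, and hostnames are illustrative assumptions, not a verified configuration:

```shell
# /etc/exports on the server -- export the pseudo-root with Kerberos:
#   /export  *(rw,sync,fsid=0,sec=krb5)
#
# /etc/sysconfig/nfs on BOTH client and server, so that rpc.gssd and
# rpc.svcgssd get started:
#   SECURE_NFS=yes
#
# Both machines need an nfs/host.fqdn@REALM key in /etc/krb5.keytab,
# with forward and reverse DNS that agree with those principal names.
#
# Client mount:
#   mount -t nfs4 server:/ /mnt -o sec=krb5
```

Mismatched DNS or keytabs tend to surface as exactly the kind of intermittent "Operation not permitted" failures described here.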

 nfs4-acl-editor is actually built into the nfs4 tools source tree,
 it's just not compiled. It's not a perfect tool, but I think well
 worth getting into the extras repository for CentOS.

nfs4-acl-tools-0.3.3-1.el5, standard in CentOS.  Does that not do what you need?

jh


Re: [CentOS] connection speeds between nodes

2011-03-07 Thread Ross Walker
On Mar 7, 2011, at 6:12 AM, wessel van der aart wes...@postoffice.nl wrote:

 Hi All,
 
 I've been asked to setup a 3d renderfarm at our office , at the start it 
 will contain about 8 nodes but it should be build at growth. now the 
 setup i had in mind is as following:
 All the data is already stored on a StorNext SAN filesystem (quantum ) 
 this should be mounted on a centos server trough fiber optics  , which 
 in its turn shares the FS over NFS to all the rendernodes (also centos).
 
 Now we've estimated that the average file send to each node will be 
 about 90MB , so that's what i like the average connection to be, i know 
 that gigabit ethernet should be able to that (testing with iperf 
 confirms that) but testing the speed to already existing nfs shares 
 gives me a 55MB max. as i'm not familiar with network shares performance 
 tweaking is was wondering if anybody here did and could give me some 
 info on this?
 Also i thought on giving all the nodes 2x1Gb-eth ports and putting those 
 in a BOND, will do this any good or do i have to take a look a the nfs 
 server side first?

1GbE can do 115MB/s @ 64K+ IO size, but at 4k IO size (NFS) 55MB/s is
about it.

If you need each node to be able to read 90-100MB/s, you would need to set
up a cluster file system using iSCSI or FC, and make sure either that the
cluster file system can handle large block/cluster sizes like 64K, or that
the application can handle large IOs and the scheduler does a good job of
coalescing them into large IOs (the VFS layer breaks them into 4k chunks).

It's the latency of each small IO that is killing you.

-Ross



[CentOS] yum tries to install a mix of architectures

2011-03-07 Thread Tim Dunphy
Hello,

 On my CentOS boxes, whenever I try to install packages I get a mix of
packages from the repos that are both i386 and x86_64 in
architecture:

===
 Package         Arch      Version          Repository  Size
===
Installing:
 boost-devel     i386      1.33.1-10.el5    base        4.3 M
 boost-devel     x86_64    1.33.1-10.el5    base        4.4 M
Installing for dependencies:
 boost           i386      1.33.1-10.el5    base        863 k
 boost           x86_64    1.33.1-10.el5    base        861 k
 libicu          i386      3.6-5.11.4       base        5.2 M
 libicu          x86_64    3.6-5.11.4       base        5.2 M

Transaction Summary
===
Install       6 Package(s)
Upgrade       0 Package(s)




Without having to specify the arch on each yum command, how can I
automatically prune my yum repo files so that yum will only grab
packages that match the architecture I'm running?


Thanks in advance!


-- 
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B


Re: [CentOS] connection speeds between nodes

2011-03-07 Thread John Hodrien
On Mon, 7 Mar 2011, Ross Walker wrote:

 1Gbe can do 115MB/s @ 64K+ IO size, but at 4k IO size (NFS) 55MB/s is about
 it.

 If you need each node to be able to read 90-100MB/s you would need to setup
 a cluster file system using iSCSI or FC and make sure the cluster file
 system can handle large block/cluster sizes like 64K or the application can
 handle large IOs and the scheduler does a good job of coalescing these (VFS
 layer breaks it into 4k chunks) into large IOs.

 It's the latency of each small IO that is killing you.

I'm not necessarily convinced it's quite that bad (here's some default NFSv3
mounts under CentOS 5.5, with Jumbo frames, rsize=32768,wsize=32768).

$ sync;time (dd if=/dev/zero of=testfile bs=1M count=10000;sync)
[I verified that it'd finished when it thought it had]
10485760000 bytes (10 GB) copied, 133.06 seconds, 78.8 MB/s

umount, mount (to clear any cache):

$ dd if=testfile of=/dev/null bs=1M
10485760000 bytes (10 GB) copied, 109.638 seconds, 95.6 MB/s

This machine only has a double-bonded gig interface so with four clients all
hammering at the same time, this gives:

$ dd if=/scratch/testfile of=/dev/null bs=1M
10485760000 bytes (10 GB) copied, 189.64 seconds, 55.3 MB/s

So with four clients (on single gig) and one server with two gig interfaces
you're getting an aggregate rate of 220Mbytes/sec.  Sounds pretty reasonable
to me!

If you want safe writes (sync), *then* latency kills you.

jh


[CentOS] Antwort: yum tries to install a mix of architectures

2011-03-07 Thread Andreas Reschke
centos-boun...@centos.org wrote on 07.03.2011 15:41:04:

 Tim Dunphy bluethu...@gmail.com 
 Sent by: centos-boun...@centos.org
 07.03.2011 15:41
 Reply to: CentOS mailing list centos@centos.org
 To: CentOS mailing list centos@centos.org
 Subject: [CentOS] yum tries to install a mix of architectures
 
 Hello,
 
  On my CentOS boxes, whenever I try to install packages I get a mix of
 packages from the repos that are both i386 and x86_64 in
 architecture:
 
 ===
  Package         Arch      Version          Repository  Size
 ===
 Installing:
  boost-devel     i386      1.33.1-10.el5    base        4.3 M
  boost-devel     x86_64    1.33.1-10.el5    base        4.4 M
 Installing for dependencies:
  boost           i386      1.33.1-10.el5    base        863 k
  boost           x86_64    1.33.1-10.el5    base        861 k
  libicu          i386      3.6-5.11.4       base        5.2 M
  libicu          x86_64    3.6-5.11.4       base        5.2 M
 
 Transaction Summary
 ===
 Install       6 Package(s)
 Upgrade       0 Package(s)
 
 
 
 
 Without having so specify the arch on each yum command how can I
 automatically prune my yum repo files so that it will only grab
 packages that relate to the architecture I'm running?
 
 
 Thanks in advance!
 
 
 -- 
 GPG me!!
 
 gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B

Hi Tim Dunphy,
that's normal when you're using x86_64. Do you need 32-bit software?
If not, you can remove it with yum remove *386.
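Another approach, rather than removing the 32-bit packages after the fact, is to stop yum from offering the other arch at all; a hedged example for CentOS 5 (the exclude globs below are the commonly used ones, adjust as needed):

```shell
# /etc/yum.conf
#   [main]
#   exclude=*.i386 *.i486 *.i586 *.i686
#
# The same exclude= line can also be set per-repository in the
# .repo files under /etc/yum.repos.d/.
```

With that in place, installing a package such as boost-devel should only pull x86_64 and noarch candidates.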
 
 
Regards, 
Andreas Reschke

Unix/Linux-Administration
andreas.resc...@behrgroup.com


[CentOS] /etc/hosts - hostname alias for 127.0.0.1

2011-03-07 Thread Sean Carolan
Can anyone point out reasons why it might be a bad idea to put this
sort of line in your /etc/hosts file, i.e., pointing the FQDN at the
loopback address?

127.0.0.1   hostname.domain.com hostname   localhost localhost.localdomain


Re: [CentOS] Centos 6 - What are you looking forward to?

2011-03-07 Thread Laurence Hurst
On Thu, Mar 03, 2011 at 03:11:52PM +, Digimer wrote:
 How about the rest of you? What are you looking forward to in CentOS 6
 when it is released?
 
For me the big wins with CentOS 6 should be SSSD, to simplify and centralise (on 
the machine) network authentication, and (hopefully!) graphics drivers which 
work with our hardware out of the box.

Laurence


Re: [CentOS] /etc/hosts - hostname alias for 127.0.0.1

2011-03-07 Thread Alexander Arlt
On 03/07/2011 05:34 PM, Sean Carolan wrote:
 Can anyone point out reasons why it might be a bad idea to put this
 sort of line in your /etc/hosts file, eg, pointing the FQDN at the
 loopback address?

 127.0.0.1   hostname.domain.com hostname   localhost localhost.localdomain

First, if your host is actually communicating with any kind of IP-based
network, it is quite certain that 127.0.0.1 simply isn't its IP
address. And, at least for me, that's a fairly good reason.

Second, sendmail had the habit of breaking if your hostname was mapped
to 127.0.0.1, but I stopped using sendmail a decade ago, so I can't
verify this. :)


Re: [CentOS] /etc/hosts - hostname alias for 127.0.0.1

2011-03-07 Thread Sean Carolan
 First, if your host is actually communicating with any kind of ip-based
 network, it is quite certain, that 127.0.0.1 simply isn't his IP
 address. And, at least for me, that's a fairly good reason.

Indeed.  It does seem like a bad idea to have a single host using
loopback while the rest of the network refers to it by its real IP
address.

 Second, sendmail had the habit of breaking if your hostname was mapped
 to 127.0.0.1, but I stopped using sendmail a decade ago, so I can't
 verify this. :)

The reason this came up is because one of our end-users requested such
a setup in the /etc/hosts file, and I didn't think it was a good idea.
Seems it would be better to fix the application(s) that require the
data to use the real network IP address.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] /etc/hosts - hostname alias for 127.0.0.1

2011-03-07 Thread Alexander Arlt
On 03/07/2011 05:49 PM, Sean Carolan wrote:
  First, if your host is actually communicating with any kind of ip-based
  network, it is quite certain, that 127.0.0.1 simply isn't his IP
  address. And, at least for me, that's a fairly good reason.
 
  Indeed.  It does seem like a bad idea to have a single host using
  loopback, while the rest of the network refers to it by it's real IP
  address.

Acknowledged. At least it will save you a lot of time next year, when 
you have forgotten about that and are wondering why every machine on the 
network can reach a service and only the host itself can't (or vice 
versa...).

  Second, sendmail had the habit of breaking if your hostname was mapped
  to 127.0.0.1, but I stopped using sendmail a decade ago, so I can't
  verify this. :)
 
  The reason this came up is because one of our end-users requested such
  a setup in the /etc/hosts file, and I didn't think it was a good idea.
  Seems it would be better to fix the application(s) that require the
  data to use the real network IP address.

Most of the time it's a good idea to fix applications before ravaging 
your network setup to make it work. :)
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos 6 - What are you looking forward to?

2011-03-07 Thread John Hodrien
On Mon, 7 Mar 2011, Laurence Hurst wrote:

 On Thu, Mar 03, 2011 at 03:11:52PM +, Digimer wrote:
 How about the rest of you? What are you looking forward to in CentOS 6
 when it is released?

 For me the big wins with CentOS 6 should be SSSD to simplify and centralise
 (on the machine) network authentication and (hopefully!) graphics drivers
 which work with our hardware out of the box.

Yes, SSSD is of interest to me too.  The last version I used was sufficiently
less adept than winbind or nss_ldap in functionality that it wasn't all
that good for use against Active Directory.  I'm assuming nested group handling
has improved somewhat since I last tried it with CentOS 5; that was the
killer at the time.

It certainly sounds like a massively improved model compared to nss_ldap;
you'd hope for much better resilience and performance.

I'm not sure I see graphics drivers as a big deal.  Have your own local repo,
add in a suitable package from elrepo, and install it at kickstart time.

jh
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] BUG: soft lockup CPU stuck for 10seconds (Server went down)

2011-03-07 Thread James A. Peltier
- Original Message -
| Hello,
| 
| Today my server stopped responding.
| I went to the console, and the screen showed a continuous loop
| of the following info:
| 
| BUG: soft lockup - CPU#0 stuck for 10s! [java:13959]
| 
| and a lot of other information.
| I've taken a screenshot of the info shown; you can find it at the
| following URL: http://img585.imageshack.us/i/img00012201103070833.jpg/
| I had to hard-reset for it to be back up and running.
| 
| I tried googling, with no luck finding directly relevant info,
| so I'm hoping you can help out.
| 
| Thanks,
| 
| --Roland
| 
| ___
| CentOS mailing list
| CentOS@centos.org
| http://lists.centos.org/mailman/listinfo/centos

This is likely due to thread deadlock.  What is the load on the machine at the 
time that this error occurs?  I've seen this very error when running The 
Mathworks Distributed Computing Toolbox server.  The machine would become very 
unsettled when it tried to run on more than four CPUs.  How many CPUs are 
in this system?

-- 
James A. Peltier
IT Services - Research Computing Group
Simon Fraser University - Burnaby Campus
Phone   : 778-782-6573
Fax : 778-782-3045
E-Mail  : jpelt...@sfu.ca
Website : http://www.sfu.ca/itservices
  http://blogs.sfu.ca/people/jpeltier


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] IPERF Server

2011-03-07 Thread Matt
When starting iperf with iperf -s or iperf -sD it seems to stop
after the client runs its first test.  I would like to leave it running
for a few hours to give someone a chance to run a few tests.  Is there
a way to leave it active on the server and kill it manually later?
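One way to keep a server process alive and kill it by hand later is the usual nohup-plus-pidfile pattern. This is just a sketch: the log and pid file paths are arbitrary, and `sleep 300` stands in for the real `iperf -s` invocation so the snippet is self-contained.

```shell
# "sleep 300" stands in for the real server command so this is self-contained;
# with iperf you would use:  SERVER_CMD="iperf -s"
SERVER_CMD="sleep 300"

nohup $SERVER_CMD >/tmp/iperf-server.log 2>&1 &   # detach from the terminal
SERVER_PID=$!
echo "$SERVER_PID" > /tmp/iperf-server.pid        # remember the PID for later

# ... hours later, kill it manually:
kill "$(cat /tmp/iperf-server.pid)"
wait "$SERVER_PID" 2>/dev/null || true            # reap the process
```

If the server really does exit after each client run, wrapping it in a restart loop inside the nohup'd command (roughly `while :; do iperf -s; done`) is another common workaround.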
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Enscript

2011-03-07 Thread Hal Davison
Greetings..

Yes, ENSCRIPT is a text-to-PostScript 
conversion utility.

As usual, I am a bit confused about how to 
implement the fit-to-page functionality. 
Google resources say it can be done, then 
proceed to dance around the issue.

Using the -ffontname@W/H option, can one 
calculate the necessary dimensions for 
a print job consisting of over 200 pages?

--Hal.

-- 
Hal Davison
Observe Goal, Set the course, Burn the map
Davison Consulting
This correspondence was composed using
Dragon Speaking Version 10
Peg#: 2007011701

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] IPERF Server

2011-03-07 Thread Brunner, Brian T.
centos-boun...@centos.org wrote:
 When starting IPERF with iperf -s or iperf -sD it seems to stop
 after client runs its first test.  I would like to leave it running
 for a few hours to give someone a chance to run a few tests.  Is there
 a way to leave it active on the server and kill it manually later?
 ___
 CentOS mailing list
 CentOS@centos.org
 http://lists.centos.org/mailman/listinfo/centos

http://sourceforge.net/projects/iperf/
http://iperf.sourceforge.net/

This has its own mailinglist with archived messages here:
http://archive.ncsa.illinois.edu/lists/iperf-users/


Google lists many hits on the keyword iperf.

I'm not sure what you're doing that this should be on a CentOS list.

Insert spiffy .sig here


//me
***
This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom
they are addressed. If you have received this email in error please
notify the system manager. This footnote also confirms that this
email message has been swept for the presence of computer viruses.
www.Hubbell.com - Hubbell Incorporated**

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] OT: grep regex pointer appreciated

2011-03-07 Thread Bill Campbell
On Mon, Mar 07, 2011, Robert Grasso wrote:
Hello,

In my opinion, grep is not powerful enough to achieve what you
want. It would be preferable to use at least some (old but powerful) tools
such as sed, awk, or even better: perl. Actually, what you need is a tool
providing a capture buffer (perl jargon; back references in sed
jargon) in which you can get the string you want to extract, rather than
trying to build up a positive matching regex, as the string boundaries seem
to be easy enough to describe with regexes.

One can use pcregrep, which is a grep that groks Perl regular
expressions.
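For a concrete (made-up) example of the capture-buffer approach in both dialects:

```shell
line='user=alice uid=1001 shell=/bin/bash'   # made-up sample input

# sed back reference: \( ... \) captures, \1 replays only the captured part
echo "$line" | sed -n 's/.*uid=\([0-9]*\).*/\1/p'   # prints: 1001

# the pcregrep equivalent would be roughly (not run here):
#   echo "$line" | pcregrep -o1 'uid=(\d+)'
```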

Bill
-- 
INTERNET:   b...@celestial.com  Bill Campbell; Celestial Software LLC
URL: http://www.celestial.com/  PO Box 820; 6641 E. Mercer Way
Voice:  (206) 236-1676  Mercer Island, WA 98040-0820
Fax:(206) 232-9186  Skype: jwccsllc (206) 855-5792

If the government can take a man's money without his consent, there is no
limit to the additional tyranny it may practise upon him; for, with his
money, it can hire soldiers to stand over him, keep him in subjection,
plunder him at discretion, and kill him if he resists.
Lysander Spooner, 1852
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos 6 - What are you looking forward to?

2011-03-07 Thread Simon Matter
 On Mon, 7 Mar 2011, Laurence Hurst wrote:

 On Thu, Mar 03, 2011 at 03:11:52PM +, Digimer wrote:
 How about the rest of you? What are you looking forward to in CentOS 6
 when it is released?

 For me the big wins with CentOS 6 should be SSSD to simplify and
 centralise
 (on the machine) network authentication and (hopefully!) graphics
 drivers
 which work with our hardware out-of-the box.

 Yes, SSSD is of interest to me too.  The last version I used was
 sufficiently
 less adept at matching winbind or nss_ldap in functionality that it wasn't
 all
 the good for use against Active Directory.  I'm assuming nested group
 handling
 has improved somewhat since I last tried it with CentOS 5, which was the
 killer when I last tried.

 It certainly sounds like a massively improved model compared to nss_ldap,
 you'd hope for much better resilience and performance.

 I'm not sure I see graphics drivers as a big deal.  Have your own local
 repo,
 add in a suitable package from elrepo, and install it at kickstart time.

Graphics drivers can be a big deal. For some netbook hardware I really
need Intel GMA3150 support, but AFAIK that's a no-go with EL5. I may be
wrong, but has anyone got it to work?

Simon

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS and Marvell SAS/SATA drivers

2011-03-07 Thread Chuck Munro


On 03/07/2011 09:00 AM, Nico Kadel-Garcia wrote:

 On Sun, Mar 6, 2011 at 10:07 PM, Charles Polisher cpol...@surewest.net
 wrote:

  https://secure.wikimedia.org/wikipedia/en/wiki/Fakeraid#Firmware.2Fdriver-based_RAID
  covers fake RAID.

 Ouch. That was *precisely* why I used the 2410, not the 1420, SATA
 card, some years back. It was nominally more expensive but well worth
 the reliability and support, which was very good for RHEL and CentOS.

 I hadn't been thinking about that HostRaid messiness because I read
 the reviews and avoided it early.


Here's the latest info which I'll share ... it's good news, thankfully.

The problem with terrible performance on the LSI controller was traced 
to a flaky disk.  It turns out that if you examine 'dmesg' carefully 
you'll find a mapping of the controller's PHY to the id X string 
(thanks to an IT friend for that tip).  The LSI error messages have 
dropped from several thousand/day to maybe 4 or 5/day when stressed.

Now the LSI controller is busy re-syncing the arrays with speed 
consistently over 100,000K/sec, which is excellent.

My scepticism regarding SMART data continues ... the flaky drive showed 
no errors, and a full test and full zero-write using the WD diagnostics 
revealed no errors either.  If the drive is bad, there's no evidence 
that would cause WD to issue an RMA.

Regarding fake raid controllers, I use them in several small machines, 
but only as JBOD with software RAID.  I haven't used Adaptec cards for 
many years, mostly because their SCSI controllers back in the early days 
were junk.

Using RAID for protecting the root/boot drives requires one bit of extra 
work ... make sure you install grub in the boot sector of at least two 
drives so you can boot from an alternate if necessary.  CentOS/SL/RHEL 
doesn't do that for you; it only puts grub in the boot sector of the 
first drive in an array.
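For reference, on an EL5-era system with GRUB legacy the extra step looks roughly like this; the device names are assumptions, so adjust them to your actual array members:

```
# grub
grub> device (hd1) /dev/sdb    # treat the second RAID member as hd1
grub> root (hd1,0)             # its /boot partition
grub> setup (hd1)              # write GRUB to its MBR
grub> quit
```

Assuming /boot is mirrored across the members, either disk should then be able to boot the array.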

Chuck
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS and Marvell SAS/SATA drivers

2011-03-07 Thread John R Pierce
On 03/07/11 10:43 AM, Chuck Munro wrote:
 I haven't used Adaptec cards for
 many years, mostly because their SCSI controllers back in the early days
 were junk.

I blame Adaptec for the dominance of IDE.  Seriously.

If Adaptec A) hadn't had the lion's share of SCSI mindshare in the PC 
business back in the 90s, and B) hadn't made so much overpriced buggy 
crap, we'd all be using SCSI today.


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] what wrong about my ipv6 address

2011-03-07 Thread ann kok
Hi 

I used the command ip -6 addr add 2001:DB8:CAFE:::12/64 dev eth0

to add ipv6 address and can see it in ifconfig 

but can't ping it

Why?

Thank you

# ping6 2001:db8:cafe:::12
PING 2001:db8:cafe:::12(2001:db8:cafe:::12) 56 data bytes
From ::1 icmp_seq=1 Destination unreachable: Address unreachable
From ::1 icmp_seq=2 Destination unreachable: Address unreachable
From ::1 icmp_seq=3 Destination unreachable: Address unreachable
From ::1 icmp_seq=5 Destination unreachable: Address unreachable
From ::1 icmp_seq=6 Destination unreachable: Address unreachable
From ::1 icmp_seq=7 Destination unreachable: Address unreachable
From ::1 icmp_seq=9 Destination unreachable: Address unreachable
From ::1 icmp_seq=10 Destination unreachable: Address unreachable
From ::1 icmp_seq=11 Destination unreachable: Address unreachable
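One thing worth noting: as rendered above, the address contains a triple colon (:::), which is not legal IPv6 syntax — an address may contain at most one "::". That may just be list-archive mangling of the original address, but it's a cheap first check:

```shell
addr='2001:db8:cafe:::12'   # exactly as it appears above

# An IPv6 address may contain at most one "::", so ":::" can never be valid
case "$addr" in
  *:::*) echo "invalid: more than one consecutive '::'" ;;
  *)     echo "syntax looks plausible" ;;
esac
```

If the address itself turns out to be fine, the usual next suspects are the prefix length and on-link route (`ip -6 route show dev eth0`) and neighbor discovery on the far end.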



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] /etc/hosts - hostname alias for 127.0.0.1

2011-03-07 Thread Keith Keller
On Mon, Mar 07, 2011 at 10:34:24AM -0600, Sean Carolan wrote:
 Can anyone point out reasons why it might be a bad idea to put this
 sort of line in your /etc/hosts file, eg, pointing the FQDN at the
 loopback address?
 
 127.0.0.1    hostname.domain.com hostname localhost localhost.localdomain

Would the application work with a hosts entry like this?

127.0.0.1    hostname.dummy   localhost localhost.localdomain

(Make sure you pick .dummy so as not to interfere with any other DNS.)

In theory you could leave off .dummy, but then you risk hostname being
completed with the search domain in resolv.conf, which creates the
problems already mentioned with putting hostname.domain.com in
/etc/hosts.  (I have not tested this at all!)

--keith

-- 
kkel...@wombat.san-francisco.ca.us



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] /etc/hosts - hostname alias for 127.0.0.1

2011-03-07 Thread Sean Carolan
 (Make sure you pick .dummy so as not to interfere with any other DNS.)

 In theory you could leave off .dummy, but then you risk hostname being
 completed with the search domain in resolv.conf, which creates the
 problems already mentioned with putting hostname.domain.com in
 /etc/hosts.  (I have not tested this at all!)

I will probably just leave this decision to the application
architects, with the recommendation that we should simply use DNS as
intended...
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS and Marvell SAS/SATA drivers

2011-03-07 Thread compdoc
 My scepticism regarding SMART data continues ... the flaky drive
showed no errors, and a full test and full zero-write using the WD
diagnostics revealed no errors either.  If the drive is bad, there's
no evidence that would cause WD to issue an RMA.


I've been having a rash of drive failures recently and I have come to trust
SMART.

One thing's for sure - SMART is not implemented the same on all drives or
controllers. Recently one older Seagate drive showed no SMART capability in
Linux using the gnome-disk-utility, but I could read the SMART data from the
drive in Windows with HD Tune.

It isn't infallible, but SMART is certainly one tool you can use in the
diagnosis. I wouldn't ignore Reallocated Sector counts or Current Pending
Sector counts, for instance.

Working for a customer this weekend, I replaced an older 60G WD drive that I
knew for months to have bad sectors, but the Reallocated Sector Count was
still 0. After a scan for errors with HD Tune, the Current Pending sector
count showed 13, but the Reallocated Sector Count never grew.

There is still a lot for me to learn - like the relationship between SMART
within the drive and the controller's support of SMART. You would think they
are independent of each other, but I wonder...
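As a rough illustration of watching exactly those two attributes, here is a sketch that parses smartctl -A-style output with awk. The sample lines and raw values are made up; in real use you would feed it `smartctl -A /dev/sda` instead of the sample variable.

```shell
# Made-up lines in "smartctl -A" output format (raw values invented)
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   140    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       13'

# Field 2 is the attribute name, field 10 the raw value; warn when non-zero
echo "$sample" | awk '$2 ~ /^(Reallocated_Sector_Ct|Current_Pending_Sector)$/ {
    status = ($10 > 0) ? " [WARN]" : " [ok]"
    print $2 ": raw=" $10 status
}'
# prints: Reallocated_Sector_Ct: raw=0 [ok]
#         Current_Pending_Sector: raw=13 [WARN]
```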


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] /etc/hosts - hostname alias for 127.0.0.1

2011-03-07 Thread m . roth
Keith Keller wrote:
 On Mon, Mar 07, 2011 at 10:34:24AM -0600, Sean Carolan wrote:
 Can anyone point out reasons why it might be a bad idea to put this
 sort of line in your /etc/hosts file, eg, pointing the FQDN at the
 loopback address?

 127.0.0.1    hostname.domain.com hostname localhost
 localhost.localdomain

 Would the application work with a hosts entry like this?

 127.0.0.1    hostname.dummy   localhost localhost.localdomain

 (Make sure you pick .dummy so as not to interfere with any other DNS.)

 In theory you could leave off .dummy, but then you risk hostname being
 completed with the search domain in resolv.conf, which creates the
 problems already mentioned with putting hostname.domain.com in
 /etc/hosts.  (I have not tested this at all!)

And giving it 127.0.0.1 would tell others to ignore it, I think. Where
did your user come up with this idea - clearly, they have *no* clue what
they're doing, and need at least a brown bag lunch about TCP/IP, and they
should not be allowed to dictate this. Their idea is a bug, and needs to
be fixed.

  mark

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] /etc/hosts - hostname alias for 127.0.0.1

2011-03-07 Thread m . roth
Sean Carolan wrote:
 (Make sure you pick .dummy so as not to interfere with any other DNS.)

 In theory you could leave off .dummy, but then you risk hostname being
 completed with the search domain in resolv.conf, which creates the
 problems already mentioned with putting hostname.domain.com in
 /etc/hosts.  (I have not tested this at all!)

 I will probably just leave this decision to the application
 architects, with the recommendation that we should simply use DNS as
 intended...

I wonder if what they need is another IP being asserted on your NIC.

  mark

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] /etc/hosts - hostname alias for 127.0.0.1

2011-03-07 Thread Kai Schaetzl
Sean Carolan wrote on Mon, 7 Mar 2011 10:49:18 -0600:

 Indeed.  It does seem like a bad idea to have a single host using
 loopback, while the rest of the network refers to it by it's real IP
 address.

It doesn't matter for the other hosts; the sender IP address will always 
be the outgoing interface address and not the loopback. It only matters if 
you connect on the local host and have to troubleshoot a connectivity 
issue and confuse something ... Usually, it's rather an advantage, because 
in cases where you would otherwise just get localhost you now get some 
meaningful name. It really depends. However, I have had it set this way on 
all my hosts for ten years or more and haven't found a single case where 
this was a problem.

Kai


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Dell PERC H800 commandline RAID monitoring tools

2011-03-07 Thread Dr. Ed Morbius
We're looking for tools to be used in monitoring the PERC H800 arrays on
a set of database servers running CentOS 5.5.

We've installed most of the OMSA (Dell monitoring) suite.

Our current alerting is happening through SNMP, though it's a bit hit or
miss (we apparently missed a couple of earlier predictive failure alerts
on one drive).

OMSA conflicts with mega-cli, though we may find that the latter is the
more useful package.  Both are pretty byzantine, and the Dell stuff simply
doesn't have docs (in particular, docs on how to interpret the omconfig
log output).

Ideally we'd like something which could be run as a Nagios plugin or
cron job providing information on RAID status and/or possible disk
errors.  Probably both, actually.

Thanks in advance.

-- 
Dr. Ed Morbius, Chief Scientist /|
  Robot Wrangler / Staff Psychologist| When you seek unlimited power
Krell Power Systems Unlimited|  Go to Krell!
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos 6 - What are you looking forward to?

2011-03-07 Thread James Nguyen
On Fri, Mar 4, 2011 at 12:11 PM, John R Pierce pie...@hogranch.com wrote:
 On 03/04/11 11:59 AM, m.r...@5-cent.us wrote:
 I'm looking forward to the new cgroups and KVM.  This will give it
   some capabilities similar to AIX virtual partitions which can divvy up
   CPUs at a fine resolution.
 Really? So IBM ported VM into native AIX? I missed that.

 IBM Power servers since the Power4+ CPU (they are up to Power7 now) have
 hardware partitioning support, commonly known as LPAR.  LPAR can be
 divided in units of 1/10th of a CPU.   The software to manage this is
 now called PowerVM (it's been called other names in the past, not all
 polite).

 In addition, AIX 6.1 and newer have Workload Partitions (WPAR), which
 are similar to Solaris Zones, these allow subdividing an AIX install
 into an arbitrary number of apparently different systems that all share
 the same kernel.

 LPAR plus VIOS (Virtual IO System, actually a stripped down
 preconfigured AIX system) corresponds to the Xen model, however the base
 hypervisor capability is built right into the CPU and IO hardware, VIOS
 just provides management and optional virtualized IO.  You can assign IO
 adapters directly to partitions, whereupon the partitions (VMs) run even
 if VIOS is shut down.  The newer Power6 and 7 servers have Ethernet
 adapters that provide each LPAR with its own hardware-virtualized
 ethernet adapter so you don't need a cage full of cards, or run all the
 networking through VIOS.


 ___
 CentOS mailing list
 CentOS@centos.org
 http://lists.centos.org/mailman/listinfo/centos


This is why I'm not totally impressed with virtualization today - I used
it eons ago in enterprise solutions. =)  There's a reason why IBM
solutions are so expensive besides the number of people they staff on
projects.  You also get technology that the industry never knew existed.

-- 
James H. Nguyen
CallFire :: Systems Architect
http://www.callfire.com
1.949.625.4263
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Dell PERC H800 commandline RAID monitoring tools

2011-03-07 Thread Eero Volotinen
2011/3/7 Dr. Ed Morbius dredmorb...@gmail.com:
 We're looking for tools to be used in monitoring the PERC H800 arrays on
 a set of database servers running CentOS 5.5.

 We've installed most of the OMSA (Dell monitoring) suite.

 Our current alerting is happening through SNMP, though it's a bit hit or
 miss (we apparently missed a couple of earlier predictive failure alerts
 on one drive).

 OMSA conflicts with mega-cli, though we may find that the latter is the
 more useful package.  Both are pretty byzantine, the Dell stuff simply
 doesn't have docs (in particular: docs on how to interpret the omconfig
 log output).

 Ideally we'd like something which could be run as a Nagios plugin or
 cron job providing information on RAID status and/or possible disk
 errors.  Probably both, actually.

if your system supports omreport (comes with OMSA), then this is a good solution:
http://folk.uio.no/trondham/software/check_openmanage.html
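For anyone wiring this into Nagios, a minimal setup might look like the following. This is a hedged sketch: the plugin path, the NRPE arrangement, and the host name are all assumptions to adapt to your install (the plugin is normally run on the monitored host itself, e.g. via NRPE).

```
# On the monitored host (nrpe.cfg) -- plugin path is an assumption:
command[check_openmanage]=/usr/lib/nagios/plugins/check_openmanage

# On the Nagios server:
define service {
    use                  generic-service
    host_name            db1                 ; hypothetical host name
    service_description  RAID / OMSA health
    check_command        check_nrpe!check_openmanage
}
```

Run from cron instead, the same plugin can simply be invoked directly and its non-zero exit status used to trigger a mail.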

--
Eero
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Dell PERC H800 commandline RAID monitoring tools

2011-03-07 Thread Dr. Ed Morbius
on 22:57 Mon 07 Mar, Eero Volotinen (eero.voloti...@iki.fi) wrote:
 2011/3/7 Dr. Ed Morbius dredmorb...@gmail.com:
  We're looking for tools to be used in monitoring the PERC H800 arrays on
  a set of database servers running CentOS 5.5.
 
  We've installed most of the OMSA (Dell monitoring) suite.
 
  Our current alerting is happening through SNMP, though it's a bit hit or
  miss (we apparently missed a couple of earlier predictive failure alerts
  on one drive).
 
  OMSA conflicts with mega-cli, though we may find that the latter is the
  more useful package.  Both are pretty byzantine, the Dell stuff simply
  doesn't have docs (in particular: docs on how to interpret the omconfig
  log output).
 
  Ideally we'd like something which could be run as a Nagios plugin or
  cron job providing information on RAID status and/or possible disk
  errors.  Probably both, actually.
 
 if your system supports omreport (comes with omsa) then this is good solution:
 http://folk.uio.no/trondham/software/check_openmanage.html

So ... this slots on top of OMSA to provide reporting?

-- 
Dr. Ed Morbius, Chief Scientist /|
  Robot Wrangler / Staff Psychologist| When you seek unlimited power
Krell Power Systems Unlimited|  Go to Krell!
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Dell PERC H800 commandline RAID monitoring tools

2011-03-07 Thread Eero Volotinen
2011/3/7 Dr. Ed Morbius dredmorb...@gmail.com:
 on 22:57 Mon 07 Mar, Eero Volotinen (eero.voloti...@iki.fi) wrote:
 2011/3/7 Dr. Ed Morbius dredmorb...@gmail.com:
  We're looking for tools to be used in monitoring the PERC H800 arrays on
  a set of database servers running CentOS 5.5.
 
  We've installed most of the OMSA (Dell monitoring) suite.
 
  Our current alerting is happening through SNMP, though it's a bit hit or
  miss (we apparently missed a couple of earlier predictive failure alerts
  on one drive).
 
  OMSA conflicts with mega-cli, though we may find that the latter is the
  more useful package.  Both are pretty byzantine, the Dell stuff simply
  doesn't have docs (in particular: docs on how to interpret the omconfig
  log output).
 
  Ideally we'd like something which could be run as a Nagios plugin or
  cron job providing information on RAID status and/or possible disk
  errors.  Probably both, actually.

 if your system supports omreport (comes with omsa) then this is good 
 solution:
 http://folk.uio.no/trondham/software/check_openmanage.html

 So ... this slots on top of OMSA to provide reporting?

This plugin parses omreport output and uses it for Nagios output.

The OMSA webserver is not required, but a working omreport CLI is ... it works
great on my servers.

--
Eero
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Dell PERC H800 commandline RAID monitoring tools

2011-03-07 Thread Blake Hudson
 Original Message  
Subject: [CentOS] Dell PERC H800 commandline RAID monitoring tools
From: Dr. Ed Morbius dredmorb...@gmail.com
To: CentOS User list centos@centos.org
Date: Monday, March 07, 2011 2:43:03 PM
 We're looking for tools to be used in monitoring the PERC H800 arrays on
 a set of database servers running CentOS 5.5.
If you purchased the server with an add-in DRAC, the DRAC can provide
email alerts if an array becomes degraded (or on just about any other
hardware fault). This isn't necessarily a replacement for your current
monitoring, but it can be used to supplement or complement it.

--Blake
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Dell PERC H800 commandline RAID monitoring tools

2011-03-07 Thread Dr. Ed Morbius
on 16:04 Mon 07 Mar, Blake Hudson (bl...@ispn.net) wrote:
  Original Message  
 Subject: [CentOS] Dell PERC H800 commandline RAID monitoring tools
 From: Dr. Ed Morbius dredmorb...@gmail.com
 To: CentOS User list centos@centos.org
 Date: Monday, March 07, 2011 2:43:03 PM
  We're looking for tools to be used in monitoring the PERC H800 arrays on
  a set of database servers running CentOS 5.5.

 If you purchased the server with an add-in DRAC, the DRAC can provide
 email alerts if an array becomes degraded (or just about any other
 hardware fault). This isn't necessarily a replacement for your current
 monitoring, but it can be used to supplement or compliment it.

The iDRAC /doesn't/ report on RAID / storage configuration or status.

iDRAC 6, Dell r610, onboard PERC H700, offboard PERC H800 (MD1200
array).  BIOS version 2.1.15, Firmware 1.54 (Build 15).

We get batteries, fans, intrusion, power, removable flash media, temps, 
and volts, but not storage.

The iDRAC is pretty good compared with some past Dell offerings.
Ability to boot virtual media in particular is very slick (I can specify
local removable storage or a drive image and mount it for booting /
diagnostics remotely).

But no RAID / storage management or monitoring.

-- 
Dr. Ed Morbius, Chief Scientist /|
  Robot Wrangler / Staff Psychologist| When you seek unlimited power
Krell Power Systems Unlimited|  Go to Krell!
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Dell PERC H800 commandline RAID monitoring tools

2011-03-07 Thread Dr. Ed Morbius
on 12:43 Mon 07 Mar, Dr. Ed Morbius (dredmorb...@gmail.com) wrote:
 We're looking for tools to be used in monitoring the PERC H800 arrays on
 a set of database servers running CentOS 5.5.

Pardoning the self-reply, but one issue we've had is reconciling the
omconfig log report with the Dell Server Manager syslog messages.

omconfig reported a predictive drive failure, but we (and three Dell
storage/support techs) had trouble identifying which actual device was
being reported as bad.


From 'omconfig storage controller action=exportlog controller=0' output:

03/04/11 21:42:42: EVT#02959-03/04/11 21:42:42:  96=Predictive failure: PD 
00(e0x08/s2)
03/05/11 14:28:41: EVT#02961-03/05/11 14:28:41: 112=Removed: PD 00(e0x08/s2)

In /var/log/messages (timestamp/hostname trimmed):

Server Administrator: Storage Service EventID: 2243  The Patrol Read has 
stopped.:  Controller 0 (PERC H800 Adapter) 
Server Administrator: Storage Service EventID: 2049  Physical disk removed: 
 Physical Disk 0:0:2 Controller 0, Connector 0

The Server Administrator report of a slot 2 failure corresponds to the
drive that was physically replaced.

The OMSA omconfig report is throwing us a bunch of crud about some
device, but Dell variously identified it as slot 0 and slot 9.  We're
now getting from them that /s2 identifies slot 2.
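Given Dell's answer, pulling the enclosure and slot out of that PD token is at least scriptable. Here is a sketch with sed against the log line quoted above; the field meanings (eXX = enclosure id, sN = slot) follow Dell's explanation, which I would still treat as an assumption:

```shell
# The relevant part of the exportlog line quoted above
line='96=Predictive failure: PD 00(e0x08/s2)'

# Capture the enclosure id and the slot number from the "(eXX/sN)" token
enclosure=$(echo "$line" | sed -n 's|.*PD [0-9]*(e\(0x[0-9a-fA-F]*\)/s[0-9]*).*|\1|p')
slot=$(echo "$line" | sed -n 's|.*PD [0-9]*(e0x[0-9a-fA-F]*/s\([0-9]*\)).*|\1|p')
echo "enclosure=$enclosure slot=$slot"
# prints: enclosure=0x08 slot=2
```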


Dell said point blank you're not going to have any luck with that as
far as getting the OMSA log report format and its parsing documented.
Does anyone have a clue as to WTF it's actually trying to say, or what
this tool is based on (I'm suspecting mega-cli, on a general hunch but
not much more)?

Enterprise support indeed.

-- 
Dr. Ed Morbius, Chief Scientist /|
  Robot Wrangler / Staff Psychologist| When you seek unlimited power
Krell Power Systems Unlimited|  Go to Krell!
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Dell PERC H800 commandline RAID monitoring tools

2011-03-07 Thread Dr. Ed Morbius
on 23:15 Mon 07 Mar, Eero Volotinen (eero.voloti...@iki.fi) wrote:
 2011/3/7 Dr. Ed Morbius dredmorb...@gmail.com:
  on 22:57 Mon 07 Mar, Eero Volotinen (eero.voloti...@iki.fi) wrote:
  2011/3/7 Dr. Ed Morbius dredmorb...@gmail.com:
   We're looking for tools to be used in monitoring the PERC H800 arrays on
   a set of database servers running CentOS 5.5.
  
   We've installed most of the OMSA (Dell monitoring) suite.
  
   Our current alerting is happening through SNMP, though it's a bit hit or
   miss (we apparently missed a couple of earlier predictive failure alerts
   on one drive).
  
   OMSA conflicts with mega-cli, though we may find that the latter is the
   more useful package.  Both are pretty byzantine, the Dell stuff simply
   doesn't have docs (in particular: docs on how to interpret the omconfig
   log output).
  
   Ideally we'd like something which could be run as a Nagios plugin or
   cron job providing information on RAID status and/or possible disk
   errors.  Probably both, actually.
 
  if your system supports omreport (comes with omsa) then this is good 
  solution:
  http://folk.uio.no/trondham/software/check_openmanage.html
 
  So ... this slots on top of OMSA to provide reporting?
 
 this plugin parsers omreport output and uses it for nagios output.

Is it running/invoking omreport or relying on periodic runs?  I'll dig
through the docs but if you know this off-hand it'd be helpful.
 
 omsa webserver is not required, but a working omreport cli is. It works
 great on my servers.

Good to know, much appreciated.

-- 
Dr. Ed Morbius, Chief Scientist /|
  Robot Wrangler / Staff Psychologist| When you seek unlimited power
Krell Power Systems Unlimited|  Go to Krell!
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Dell PERC H800 commandline RAID monitoring tools

2011-03-07 Thread Eero Volotinen
2011/3/8 Dr. Ed Morbius dredmorb...@gmail.com:
 on 23:15 Mon 07 Mar, Eero Volotinen (eero.voloti...@iki.fi) wrote:
 2011/3/7 Dr. Ed Morbius dredmorb...@gmail.com:
  on 22:57 Mon 07 Mar, Eero Volotinen (eero.voloti...@iki.fi) wrote:
  2011/3/7 Dr. Ed Morbius dredmorb...@gmail.com:
   We're looking for tools to be used in monitoring the PERC H800 arrays on
   a set of database servers running CentOS 5.5.
  
   We've installed most of the OMSA (Dell monitoring) suite.
  
   Our current alerting is happening through SNMP, though it's a bit hit or
   miss (we apparently missed a couple of earlier predictive failure alerts
   on one drive).
  
   OMSA conflicts with mega-cli, though we may find that the latter is the
   more useful package.  Both are pretty byzantine, the Dell stuff simply
   doesn't have docs (in particular: docs on how to interpret the omconfig
   log output).
  
   Ideally we'd like something which could be run as a Nagios plugin or
   cron job providing information on RAID status and/or possible disk
   errors.  Probably both, actually.
 
  if your system supports omreport (comes with omsa) then this is a good
  solution:
  http://folk.uio.no/trondham/software/check_openmanage.html
 
  So ... this slots on top of OMSA to provide reporting?

 this plugin parses omreport output and uses it for nagios output.

 Is it running/invoking omreport or relying on periodic runs?  I'll dig
 through the docs but if you know this off-hand it'd be helpful.

It runs omreport each time nagios polls it via nrpe or snmp.
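For anyone wiring this up, the NRPE side usually looks something like the following. The plugin path, host name, and service name here are illustrative placeholders, not details from this thread; check the check_openmanage docs for the real options.

```
# On the monitored host, in nrpe.cfg (path varies by install):
command[check_openmanage]=/usr/lib64/nagios/plugins/check_openmanage

# On the Nagios server, a service definition that polls it via NRPE:
define service {
    use                  generic-service
    host_name            dbserver1
    service_description  OpenManage hardware status
    check_command        check_nrpe!check_openmanage
}
```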

--
Eero
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] email to web posting software?

2011-03-07 Thread Dave Stevens
Dear CentOS,

I have a user group that would like to be able to routinely post (easily) 
emails to a web site. Must be usable without special training. I have no 
experience with this. Anyone have a suggestion? LAMP stack installed.

Dave

-- 


When a respected information source covers something where you have on-the-
ground experience, the result is often to make you wonder how much fecal 
matter you've swallowed in areas outside your own expertise.

-- Rusty Russell
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] email to web posting software?

2011-03-07 Thread Dr. Ed Morbius
on 14:41 Mon 07 Mar, Dave Stevens (g...@uniserve.com) wrote:
 Dear CentOS,
 
 I have a user group that would like to be able to routinely post (easily) 
 emails to a web site. Must be usable without special training. I have no 
 experience with this. Anyone have a suggestion? LAMP stack installed.

https://posterous.com/

-- 
Dr. Ed Morbius, Chief Scientist /|
  Robot Wrangler / Staff Psychologist| When you seek unlimited power
Krell Power Systems Unlimited|  Go to Krell!
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] email to web posting software?

2011-03-07 Thread John R Pierce
On 03/07/11 2:41 PM, Dave Stevens wrote:
 Dear CentOS,

 I have a user group that would like to be able to routinely post (easily)
 emails to a web site. Must be usable without special training. I have no
 experience with this. Anyone have a suggestion? LAMP stack installed.

you mean, like a web archive of the mails sent to a specific address?

you might look at hypermail.

this is a hypermail archive of the current month of a discussion list.
http://observers.org/tac.mailing.list/2011/Mar/


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] email to web posting software?

2011-03-07 Thread Frank Cox
On Mon, 07 Mar 2011 14:41:03 -0800
Dave Stevens wrote:

 I have a user group that would like to be able to routinely post (easily) 
 emails to a web site. Must be usable without special training. I have no 
 experience with this. Anyone have a suggestion? LAMP stack installed.

I did this a while back (and will probably be doing  it again on another project
shortly).

I just wrote a little program that reads whatever is sent to its email address.
It checks for a password in the subject line and then formats and embeds the
email content on a webpage.  While it's not the most secure way of doing
things, it's the most painless way I can think of to provide (very)
non-technical users with a method to post the local swim meet schedule or
whatever. Some of the users who need to be able to update these kinds of web
pages don't have their own computers or anything, so this way all they need is
access to any webmail service and they can still do it from anywhere.

The program as it exists right now is pretty special-purpose, as it has the
webpage template and passwords built-in so it's not of much general interest at
the moment. Perhaps re-implementing it for this new project is a good reason to
break some of that stuff out into config files instead.  I also just woke up to
the fact that I could simplify it a lot by using procmail to feed the email to
it and crank up the program.  At the moment I read the mail spool directly and
invoke the program with cron, which isn't the best way to do it.
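A minimal sketch of that procmail-fed approach. The password scheme, HTML handling, and output path are all illustrative guesses, not Frank's actual program:

```shell
#!/bin/sh
# Accept one mail message on stdin (as procmail would deliver it), require a
# password token in the Subject line, then append the body to a web page.
# PASSWORD and OUTPUT are illustrative placeholders.
PASSWORD="swimteam"
OUTPUT="${OUTPUT:-/var/www/html/news.html}"

post_mail() {
    msg=$(cat)                                    # whole message from stdin
    subject=$(printf '%s\n' "$msg" | sed -n 's/^Subject: //p' | head -n 1)
    case "$subject" in
        *"$PASSWORD"*) ;;                         # token found: accept
        *) echo "rejected: no password in subject" >&2; return 1 ;;
    esac
    # Body = everything after the first blank line; escape HTML metacharacters.
    body=$(printf '%s\n' "$msg" |
        sed '1,/^$/d' |
        sed 's/&/\&amp;/g; s/</\&lt;/g; s/>/\&gt;/g')
    printf '<div class="post"><pre>%s</pre></div>\n' "$body" >> "$OUTPUT"
    echo "posted"
}
```

A one-line procmail recipe piping matching mail into this script would replace the cron-plus-mail-spool polling Frank mentions.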

-- 
MELVILLE THEATRE ~ Melville Sask ~ www.melvilletheatre.com
www.creekfm.com - FIFTY THOUSAND WATTS of POW WOW POWER!
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] /etc/hosts - hostname alias for 127.0.0.1

2011-03-07 Thread Keith Keller
On Mon, Mar 07, 2011 at 09:31:17PM +0100, Kai Schaetzl wrote:
 
 Usually, it's rather an advantage because 
 in cases where you would just get localhost you now get some meaningful 
 name.

You can use the bare hostname as an alias in /etc/hosts, which is
probably marginally better than using the FQDN.

In CentOS, I believe that rc.sysinit will try to set the hostname from
its FQDN (or whatever you have set in /etc/sysconfig/network) without
mucking about with /etc/hosts.

--keith


-- 
kkel...@wombat.san-francisco.ca.us



pgplTIQhXvU5s.pgp
Description: PGP signature
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] email to web posting software?

2011-03-07 Thread Dave Stevens
On Monday, March 07, 2011 02:41:03 pm Dave Stevens wrote:
 Dear CentOS,
 
 I have a user group that would like to be able to routinely post (easily)
 emails to a web site. Must be usable without special training. I have no
 experience with this. Anyone have a suggestion? LAMP stack installed.
 
 Dave

Thanks. I'll need to sort through this a bit. Frank, if you do generalize your 
work so others can use it, will you let us know please?

dave

-- 


When a respected information source covers something where you have on-the-
ground experience, the result is often to make you wonder how much fecal 
matter you've swallowed in areas outside your own expertise.

-- Rusty Russell
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] /etc/hosts - hostname alias for 127.0.0.1

2011-03-07 Thread Robert Spangler
On Monday 07 March 2011 15:22, the following was written:

  Keith Keller wrote:
   On Mon, Mar 07, 2011 at 10:34:24AM -0600, Sean Carolan wrote:
   Can anyone point out reasons why it might be a bad idea to put this
   sort of line in your /etc/hosts file, eg, pointing the FQDN at the
   loopback address?
  
    127.0.0.1    hostname.domain.com hostname localhost localhost.localdomain

You can do this if you want.  The hosts file is only used by the machine it is 
on.  As to whether it's a bad idea, that depends on what you are trying to do 
and whether the process you are trying to reach locally is listening on that 
IP address.

I have only the short name configured on 127.0.0.1

   Would the application work with a hosts entry like this?

If the process was configured to listen on that interface, yes.

   127.0.0.1    hostname.dummy   localhost localhost.localdomain
  
   (Make sure you pick .dummy so as not to interfere with any other DNS.)

Why do you need the '.dummy'? The short name should work fine.

   In theory you could leave off .dummy, but then you risk hostname being
   completed with the search domain in resolv.conf, which creates the
   problems already mentioned with putting hostname.domain.com in
   /etc/hosts.  (I have not tested this at all!)

Resolv.conf is not used for the hosts file, it is used for DNS.  I have my 
short name configured to the lo interface and the FQDN to the real ip 
address.  If I ping the short name I get this:

etc $ ping -c 3 bms
PING bms (127.0.0.1) 56(84) bytes of data.
64 bytes from bms (127.0.0.1): icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from bms (127.0.0.1): icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from bms (127.0.0.1): icmp_seq=3 ttl=64 time=0.037 ms

If I ping the FQDN I get this:

etc $ ping -c 3 bms.domain.com
PING bms.domain.com (x.x.x.x) 56(84) bytes of data.
64 bytes from bms.domain.com (x.x.x.x): icmp_seq=1 ttl=64 time=0.037 ms
64 bytes from bms.domain.com (x.x.x.x): icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from bms.domain.com (x.x.x.x): icmp_seq=3 ttl=64 time=0.093 ms


  And giving it 127.0.0.1 would tell others to ignore it, I think. Where
  did your user come up with this idea - clearly, they have *no* clue what
  they're doing, and need at least a brown bag lunch about TCP/IP, and they
  should not be allowed to dictate this. Their idea is a bug, and needs to
  be fixed.

How do you figure this?  The hosts file is ONLY used locally.  If someone is 
looking you up, they are using DNS unless they have you configured 
in their hosts file.

Their idea might be flawed, but it is not a bug.


-- 

Regards
Robert

Linux
The adventure of a lifetime.

Linux User #296285
Get Counted
http://counter.li.org/
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] /etc/hosts - hostname alias for 127.0.0.1

2011-03-07 Thread Nico Kadel-Garcia
On Mon, Mar 7, 2011 at 11:34 AM, Sean Carolan scaro...@gmail.com wrote:
 Can anyone point out reasons why it might be a bad idea to put this
 sort of line in your /etc/hosts file, eg, pointing the FQDN at the
 loopback address?

 127.0.0.1    hostname.domain.com hostname   localhost localhost.localdomain
 ___
 CentOS mailing list
 CentOS@centos.org
 http://lists.centos.org/mailman/listinfo/centos


It's common to do so, so that the network lookups for hostname still
operate, even if the rest of the network is dead. This is particularly
important for self-monitoring, sendmail (which relies on the FQHN
being first, mind you!!!) and X Windows.

If you have an intermittent network connection, such as one for a DSL
connected device or a roving wireless connection, keeping the hostname
in the 127.0.0.1 line helps assure that the X sessions work, even when
other connections are interrupted. It also helps improve performance
for local network access and keeps your external ports uncluttered by
local CIFS and NFS access.

That said, it can be problematic when you ping $HOSTNAME and get a
valid 127.0.0.1 response, and haven't actually tested your external
port. It also requires thought for configuring SSH and SNMP and NFS to
allow localhost access.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] yum tries to install a mix of architectures

2011-03-07 Thread Nico Kadel-Garcia
On Mon, Mar 7, 2011 at 9:41 AM, Tim Dunphy bluethu...@gmail.com wrote:
 Hello,

  On my centos boxes whenever I try to install packages I get a mix of
 packages from the repos that are both i386 and x86_64 in
 archictecture:

Jump to CentOS 6. Wait, that's not out yet. Buy an RHEL 6 license or
test with Scientific Linux 6 until CentOS 6 is out. The default behavior
of yum has changed, and it's just safer and easier to work with an
architecture that does the more selective thing by default.
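On CentOS 5 itself, a common stopgap is to tell yum to skip the 32-bit packages entirely. This is the usual approach, not something from the thread, and it is only safe if nothing on the box actually needs multilib:

```
# /etc/yum.conf
[main]
exclude=*.i386 *.i486 *.i586 *.i686
```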
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] keepalived+LVS

2011-03-07 Thread bedo
Hello, all!
If I want to use the LVS function of keepalived, must I install ipvsadm?
Thanks!
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] keepalived+LVS

2011-03-07 Thread Barry Brimer
 Hello, all!
 If I want to use the LVS function of keepalived, must I install ipvsadm?
 Thanks!

I haven't used keepalived with lvs in ages, but I believe it works 
directly with the kernel, and therefore does not strictly require 
ipvsadm.  Please note that ipvsadm is a userspace tool for 
manipulating/querying ipvs entries and is very useful.  I've never used 
lvs on any machine where I didn't also install ipvsadm.
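To see what keepalived has actually pushed into the kernel, the usual check is `ipvsadm -L -n`. A guarded sketch so it degrades gracefully where the tool or privileges are missing:

```shell
# List the in-kernel IPVS table: -L lists virtual services and their real
# servers, -n keeps addresses and ports numeric. Normally needs root.
list_ipvs() {
    if command -v ipvsadm >/dev/null 2>&1; then
        ipvsadm -L -n 2>&1 || echo "ipvsadm: could not query IPVS (need root?)"
    else
        echo "ipvsadm: not installed"
    fi
}
list_ipvs
```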

Hope this helps,
Barry
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] keepalived+LVS

2011-03-07 Thread Steve Barnes
   all!
 If I want to use the LVS function of keepalived, must I install ipvsadm?
 Thanks!
 ___
 CentOS mailing list
 CentOS@centos.org
 http://lists.centos.org/mailman/listinfo/centos


[steve@mail ~]$ yum provides '*/ipvsadm'
Loaded plugins: fastestmirror
addons                                              |  951 B     00:00
base                                                | 2.1 kB     00:00
extras                                              | 2.1 kB     00:00
updates                                             | 1.9 kB     00:00
ipvsadm-1.24-10.x86_64 : Utility to administer the Linux Virtual Server
Repo: base
Matched from:
Filename: /etc/rc.d/init.d/ipvsadm
Filename: /sbin/ipvsadm

I use keepalived/lvs. Yes, you need to install it. Otherwise, there's no way 
for you to manage the lvs function?

At least, that's what I've been led to believe...

Cheers

Steve

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] /etc/hosts - hostname alias for 127.0.0.1

2011-03-07 Thread Robert Nichols
On 03/07/2011 08:21 PM, Nico Kadel-Garcia wrote:

 That said, it can be problematic when you ping $HOSTNAME and get a
 valid 127.0.0.1 response, and haven't actually tested your external
 port. It also requires thought for configuring SSH and SNMP and NFS to
 allow localhost access.

When you ping the IP address of your external link, that packet gets
short-circuited in the kernel and never goes to the physical port,
so you aren't testing your external port for that case either.

-- 
Bob Nichols NOSPAM is really part of my email address.
 Do NOT delete it.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] keepalived+LVS

2011-03-07 Thread bedo
Thanks for the reply!
If I only use the HA+LVS configuration of keepalived, the load balancing does not work.
Then I installed ipvsadm and set up LVS with TUN via ipvsadm, and it works.

command line below:
ipvsadm -A -t 172.16.39.100:80 -s rr
ipvsadm -a -t 172.16.39.100:80 -r 172.16.39.30:80 -i
ipvsadm -a -t 172.16.39.100:80 -r 172.16.39.40:80 -i


-lb server
(master) keepalived.conf
global_defs {
router_id LVS_DEVEL_M
}

vrrp_instance websev {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1

authentication {
auth_type PASS
auth_pass 
}

virtual_ipaddress {
172.16.39.100
}
}

virtual_server 172.16.39.100 80 {
delay_loop 6
lb_algo rr
lb_kind TUN
persistence_timeout 10
protocol TCP

real_server 172.16.39.30 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 172.16.39.40 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
---real server 1
command --
   ifconfig tunl0 172.16.39.100 netmask 255.255.255.0 up
   route add -host 172.16.39.100 dev tunl0
   echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
   echo 2 > /proc/sys/net/ipv4/conf/tunl0/arp_announce
   echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
   echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
   sysctl -p
--


2011/3/8 Steve Barnes st...@echo.id.au

all!
  If I want to use the LVS function of keepalived, must I install ipvsadm?
  Thanks!
  ___
  CentOS mailing list
  CentOS@centos.org
  http://lists.centos.org/mailman/listinfo/centos
 

 [steve@mail ~]$ yum provides '*/ipvsadm'
 Loaded plugins: fastestmirror
 addons                                             |  951 B     00:00
 base                                               | 2.1 kB     00:00
 extras                                             | 2.1 kB     00:00
 updates                                            | 1.9 kB     00:00
 ipvsadm-1.24-10.x86_64 : Utility to administer the Linux Virtual Server
 Repo: base
 Matched from:
 Filename: /etc/rc.d/init.d/ipvsadm
 Filename: /sbin/ipvsadm

 I use keepalived/lvs. Yes, you need to install it. Otherwise, there's no
 way for you to manage the lvs function?

 At least, that's what I've been led to believe...

 Cheers

 Steve

 ___
 CentOS mailing list
 CentOS@centos.org
 http://lists.centos.org/mailman/listinfo/centos

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] keepalived+LVS

2011-03-07 Thread Steve Barnes
 If I only use the HA+LVS configuration of keepalived, the load balancing does not work.
 Then I installed ipvsadm and set up LVS with TUN via ipvsadm, and it works.
 command line below:
 ipvsadm -A -t 172.16.39.100:80 -s rr
 ipvsadm -a -t 172.16.39.100:80 -r 172.16.39.30:80 -i
 ipvsadm -a -t 172.16.39.100:80 -r 172.16.39.40:80 -i
 -lb server 
 (master) keepalived.conf
 global_defs {
     router_id LVS_DEVEL_M
 }
 vrrp_instance websev {
     state MASTER
     interface eth0
     virtual_router_id 51
     priority 100
     advert_int 1
     authentication {
     auth_type PASS
     auth_pass 
     }
     virtual_ipaddress {
     172.16.39.100
     }
 }
 virtual_server 172.16.39.100 80 {
     delay_loop 6
     lb_algo rr
     lb_kind TUN
     persistence_timeout 10
     protocol TCP
     real_server 172.16.39.30 80 {
     weight 1
     TCP_CHECK {
     connect_timeout 3
     nb_get_retry 3
     delay_before_retry 3
     }
     }
     real_server 172.16.39.40 80 {
     weight 1
     TCP_CHECK {
     connect_timeout 3
     nb_get_retry 3
     delay_before_retry 3
     }
     }
 }
 ---real server 1 
 command --
    ifconfig tunl0 172.16.39.100 netmask 255.255.255.0 up
    route add -host 172.16.39.100 dev tunl0
    echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/tunl0/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p

Assuming I understand you correctly, and assuming you have an init.d script in 
place, run this command:

 grep daemon /etc/init.d/keepalived

Odds are, you're editing /usr/local/etc/keepalived.conf, but the init.d script 
starts keepalived and tells it to use /etc/keepalived.conf

?

Steve

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] keepalived+LVS

2011-03-07 Thread bedo
No, my point is that keepalived's LVS needs ipvsadm.

2011/3/8 Steve Barnes st...@echo.id.au

  If I only use the HA+LVS configuration of keepalived, the load balancing
 does not work.
  Then I installed ipvsadm and set up LVS with TUN via ipvsadm, and it works.
  command line below:
  ipvsadm -A -t 172.16.39.100:80 -s rr
  ipvsadm -a -t 172.16.39.100:80 -r 172.16.39.30:80 -i
  ipvsadm -a -t 172.16.39.100:80 -r 172.16.39.40:80 -i
  -lb server
 (master) keepalived.conf
  global_defs {
  router_id LVS_DEVEL_M
  }
  vrrp_instance websev {
  state MASTER
  interface eth0
  virtual_router_id 51
  priority 100
  advert_int 1
  authentication {
  auth_type PASS
  auth_pass 
  }
  virtual_ipaddress {
  172.16.39.100
  }
  }
  virtual_server 172.16.39.100 80 {
  delay_loop 6
  lb_algo rr
  lb_kind TUN
  persistence_timeout 10
  protocol TCP
  real_server 172.16.39.30 80 {
  weight 1
  TCP_CHECK {
  connect_timeout 3
  nb_get_retry 3
  delay_before_retry 3
  }
  }
  real_server 172.16.39.40 80 {
  weight 1
  TCP_CHECK {
  connect_timeout 3
  nb_get_retry 3
  delay_before_retry 3
  }
  }
  }
  ---real
 server 1 command
 --
 ifconfig tunl0 172.16.39.100 netmask 255.255.255.0 up
 route add -host 172.16.39.100 dev tunl0
 echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
 echo 2 > /proc/sys/net/ipv4/conf/tunl0/arp_announce
 echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
 echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
 sysctl -p

 Assuming I understand you correctly, and assuming you have an init.d script
 in place, run this command:

 grep daemon /etc/init.d/keepalived

 Odds are, you're editing /usr/local/etc/keepalived.conf, but the init.d
 script starts keepalived and tells it to use /etc/keepalived.conf

 ?

 Steve

 ___
 CentOS mailing list
 CentOS@centos.org
 http://lists.centos.org/mailman/listinfo/centos

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] keepalived+LVS

2011-03-07 Thread Steve Barnes
 No, my point is that keepalived's LVS needs ipvsadm.

Ah right. Sorry, I thought you were having more problems :)

Steve


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] BUG: soft lockup CPU stuck for 10seconds (Server went down)

2011-03-07 Thread Roland RoLaNd

I couldn't connect to it through SSH to check.
And at the console I couldn't do anything either, as repetitive output like 
the attached screenshot kept appearing.

This is an internal testing server with Apache and MySQL installed.
Usually the load average is 8% max. 

server's specs:

Intel Core i7-950 3.06GHz 8MB Quad Core 4.8GT/s
8 GB of RAM

It's worth mentioning that I have software RAID and LVM set up on it.
I've read online that the weekly RAID check schedule may cause this, though I 
can't be sure of that. Any advice on how to check?
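One way to answer the "how to check" part: on CentOS 5 the weekly md scrub is normally driven by a cron job shipped with mdadm. The paths below are the usual defaults and may differ on your install:

```shell
# Report whether the weekly software-RAID check is scheduled, and show the
# current array state (a running scrub appears as "check"/"resync" progress).
raid_check_status() {
    for f in /etc/cron.weekly/99-raid-check /etc/sysconfig/raid-check; do
        if [ -e "$f" ]; then
            echo "$f: present"
        else
            echo "$f: not found"
        fi
    done
    cat /proc/mdstat 2>/dev/null || echo "/proc/mdstat: not available"
}
raid_check_status
```

Comparing the lockup timestamps against the cron schedule (weekly, typically early Sunday morning) would confirm or rule out the RAID check as the trigger.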




 Date: Mon, 7 Mar 2011 09:24:16 -0800
 From: jpelt...@sfu.ca
 To: centos@centos.org
 Subject: Re: [CentOS] BUG: soft lockup CPU stuck for 10seconds (Server went 
 down)
 
 - Original Message -
 | Hello,
 | 
 | Today my server stopped responding.
 | i went to the console and on the screen there were a continuous loop
 | of the following info shown on the screen:
 | 
 | BUG: soft lockup - CPU#0 stuck for 10s! [java:13959]
 | 
 | and alot of other information.
 | I've taken a screenshot of the info shown; you can find it under the
 | following url: http://img585.imageshack.us/i/img00012201103070833.jpg/
 | and had to hard reset for it to be back up and running.
 | 
 | i tried googling with no luck for direct relevant info.
 | so hoping you can help out
 | 
 | Thanks,
 | 
 | --Roland
 | 
 | ___
 | CentOS mailing list
 | CentOS@centos.org
 | http://lists.centos.org/mailman/listinfo/centos
 
 This is likely due to thread deadlock.  What is the load on the machine at 
 the time that this error occurs?  I've seen this very error when running The 
 Mathworks Distributed Computing Toolbox server.  The machine would become 
 very unsettled when it tried to run on more than 4 CPUs.  How many 
 CPUs are in this system?
 
 -- 
 James A. Peltier
 IT Services - Research Computing Group
 Simon Fraser University - Burnaby Campus
 Phone   : 778-782-6573
 Fax : 778-782-3045
 E-Mail  : jpelt...@sfu.ca
 Website : http://www.sfu.ca/itservices
   http://blogs.sfu.ca/people/jpeltier
 
 
 ___
 CentOS mailing list
 CentOS@centos.org
 http://lists.centos.org/mailman/listinfo/centos
  ___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos