Re: [CentOS-docs] Some questions: Release Notes CentOS 5.6

2011-04-15 Thread David Hrbáč
On 14.4.2011 11:31, Timothy Lee wrote:
 No objections. But please add a link for the English translation (yes,
 even inside the English page), so that a person on the translated page 
 can access the English version if necessary.

 Regards,
 Timothy Lee

I have just updated the translation list. It's sorted by ISO country
code and contains only localised language names. I wish we could do the
same with the ISO checksums.
DH
___
CentOS-docs mailing list
CentOS-docs@centos.org
http://lists.centos.org/mailman/listinfo/centos-docs


Re: [CentOS-docs] Some questions: Release Notes CentOS 5.6

2011-04-15 Thread Hardy Beltran Monasterios
On 15/04/11 04:44, David Hrbáč wrote:
 On 14.4.2011 11:31, Timothy Lee wrote:
 No objections. But please add a link for the English translation (yes,
 even inside the English page), so that a person on the translated page
 can access the English version if necessary.

 Regards,
 Timothy Lee
 I have just updated the translation list. It's sorted by ISO country
 code and contains only localised language names.
Fine. Now, how do we use the #begin/#end-translation marks?

   I wish we could do the
 same with the ISO checksums.
 DH
+1

Thanks !


-- 

Hardy Beltran Monasterios




[CentOS-docs] WebSite V2 - progress

2011-04-15 Thread Marian Marinov
Hello guys,
we have made some progress on the new web site project.
We need your comments on the design of the front page.
We have 3 proposals for the design of the front page.

Please look at them; we need your help :)

  http://qaweb.dev.centos.org/websitever2/

Best regards,
Marian




[CentOS-announce] CEEA-2011:0353 CentOS 5 i386 tzdata Update

2011-04-15 Thread Karanbir Singh

CentOS Errata and Enhancement Advisory 2011:0353 

Upstream details at : https://rhn.redhat.com/errata/RHEA-2011-0353.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( md5sum Filename ) 

i386:
4af04eaafd692331c48fcf122a38284f  tzdata-2011b-1.el5.i386.rpm
5f7138d5849e3b6028c9cb101d1cb259  tzdata-java-2011b-1.el5.i386.rpm

Source:
3f417d0f9bbd3db915215f73d431ba24  tzdata-2011b-1.el5.src.rpm
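For anyone pulling these packages from a mirror by hand, the published md5sums can be checked with `md5sum -c`. A minimal sketch, using a scratch file in place of the real RPM so it runs anywhere; for an actual check you would paste the "hash  filename" line from the advisory above:

```shell
# Sketch: verifying a mirror download against the advisory's md5sum.
# We hash a scratch file here; on a real check, replace advisory.md5
# with the "<hash>  <filename>" line published in the advisory.
printf 'demo contents\n' > pkg.rpm
md5sum pkg.rpm > advisory.md5      # stand-in for the advisory's line
md5sum -c advisory.md5             # prints "pkg.rpm: OK" on a match
```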


-- 
Karanbir Singh
CentOS Project { http://www.centos.org/ }
irc: z00dax, #cen...@irc.freenode.net

___
CentOS-announce mailing list
CentOS-announce@centos.org
http://lists.centos.org/mailman/listinfo/centos-announce


[CentOS-announce] CESA-2011:0337 Important CentOS 5 i386 vsftpd Update

2011-04-15 Thread Karanbir Singh

CentOS Errata and Security Advisory 2011:0337 Important

Upstream details at : https://rhn.redhat.com/errata/RHSA-2011-0337.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( md5sum Filename ) 

i386:
b33b25344560f84dd12a82047345f3fa  vsftpd-2.0.5-16.el5_6.1.i386.rpm

Source:
4b26382df00baac076b9b1223ec5879b  vsftpd-2.0.5-16.el5_6.1.src.rpm


-- 
Karanbir Singh
CentOS Project { http://www.centos.org/ }
irc: z00dax, #cen...@irc.freenode.net



[CentOS-announce] CESA-2011:0373 Important CentOS 5 i386 xulrunner Update

2011-04-15 Thread Karanbir Singh

CentOS Errata and Security Advisory 2011:0373 Important

Upstream details at : https://rhn.redhat.com/errata/RHSA-2011-0373.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( md5sum Filename ) 

i386:
c01c86a863ec7724d5351776d62450e4  xulrunner-1.9.2.15-2.el5_6.i386.rpm
7ad2fc585cdaa70370e42a6b9c99badb  xulrunner-devel-1.9.2.15-2.el5_6.i386.rpm

Source:
01f4e3280bce91d44086da118bca7483  xulrunner-1.9.2.15-2.el5_6.src.rpm


-- 
Karanbir Singh
CentOS Project { http://www.centos.org/ }
irc: z00dax, #cen...@irc.freenode.net



[CentOS-announce] CEBA-2011:0342 CentOS 5 x86_64 xen Update

2011-04-15 Thread Karanbir Singh

CentOS Errata and Bugfix Advisory 2011:0342 

Upstream details at : https://rhn.redhat.com/errata/RHBA-2011-0342.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( md5sum Filename ) 

x86_64:
4a178e1740228495b3f112d842663a8d  xen-3.0.3-120.el5_6.1.x86_64.rpm
e3b3dc5264744ecdfb9cd0708ef2e837  xen-devel-3.0.3-120.el5_6.1.i386.rpm
36d0430ad86d225b46d6264d280a7695  xen-devel-3.0.3-120.el5_6.1.x86_64.rpm
aa3cd008cc1e93e2723991b9fd9a0d80  xen-libs-3.0.3-120.el5_6.1.i386.rpm
3fe2274061f36fc728c8229356f1bb80  xen-libs-3.0.3-120.el5_6.1.x86_64.rpm

Source:
c806b3cf462b996eca62d0fbf645e892  xen-3.0.3-120.el5_6.1.src.rpm


-- 
Karanbir Singh
CentOS Project { http://www.centos.org/ }
irc: z00dax, #cen...@irc.freenode.net



[CentOS-announce] CEBA-2011:0359 CentOS 5 i386 xulrunner Update

2011-04-15 Thread Karanbir Singh

CentOS Errata and Bugfix Advisory 2011:0359 

Upstream details at : https://rhn.redhat.com/errata/RHBA-2011-0359.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( md5sum Filename ) 

i386:
31ed6ca2e213bed33847e6f2e116695b  xulrunner-1.9.2.15-1.el5_6.i386.rpm
5b2f95c2bdb79df3c599b1c2b5a99e1d  xulrunner-devel-1.9.2.15-1.el5_6.i386.rpm

Source:
e71ce8e3909e2c8a5e0e63e8b6ab8225  xulrunner-1.9.2.15-1.el5_6.src.rpm


-- 
Karanbir Singh
CentOS Project { http://www.centos.org/ }
irc: z00dax, #cen...@irc.freenode.net



[CentOS-announce] CESA-2011:0337 Important CentOS 5 x86_64 vsftpd Update

2011-04-15 Thread Karanbir Singh

CentOS Errata and Security Advisory 2011:0337 Important

Upstream details at : https://rhn.redhat.com/errata/RHSA-2011-0337.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( md5sum Filename ) 

x86_64:
04d76e94af7d17ddefdde420eb3a8d97  vsftpd-2.0.5-16.el5_6.1.x86_64.rpm

Source:
4b26382df00baac076b9b1223ec5879b  vsftpd-2.0.5-16.el5_6.1.src.rpm


-- 
Karanbir Singh
CentOS Project { http://www.centos.org/ }
irc: z00dax, #cen...@irc.freenode.net



[CentOS-announce] CESA-2011:0370 Moderate CentOS 5 i386 wireshark Update

2011-04-15 Thread Karanbir Singh

CentOS Errata and Security Advisory 2011:0370 Moderate

Upstream details at : https://rhn.redhat.com/errata/RHSA-2011-0370.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( md5sum Filename ) 

i386:
712eb48747851eaec39a295af3106151  wireshark-1.0.15-1.el5_6.4.i386.rpm
d0aa9b76d3e78f19e7c18bf4013c5dee  wireshark-gnome-1.0.15-1.el5_6.4.i386.rpm

Source:
8559e3dab21c1aa520a12da6842f82da  wireshark-1.0.15-1.el5_6.4.src.rpm


-- 
Karanbir Singh
CentOS Project { http://www.centos.org/ }
irc: z00dax, #cen...@irc.freenode.net



[CentOS-announce] CEEA-2011:0353 CentOS 5 x86_64 tzdata Update

2011-04-15 Thread Karanbir Singh

CentOS Errata and Enhancement Advisory 2011:0353 

Upstream details at : https://rhn.redhat.com/errata/RHEA-2011-0353.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( md5sum Filename ) 

x86_64:
6fa55e58d791036867edfa187564000f  tzdata-2011b-1.el5.x86_64.rpm
859ae882baf8fc4055ef34a0b01a820b  tzdata-java-2011b-1.el5.x86_64.rpm

Source:
3f417d0f9bbd3db915215f73d431ba24  tzdata-2011b-1.el5.src.rpm


-- 
Karanbir Singh
CentOS Project { http://www.centos.org/ }
irc: z00dax, #cen...@irc.freenode.net



[CentOS-announce] CEEA-2011:0378 CentOS 5 x86_64 tzdata Update

2011-04-15 Thread Karanbir Singh

CentOS Errata and Enhancement Advisory 2011:0378 

Upstream details at : https://rhn.redhat.com/errata/RHEA-2011-0378.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( md5sum Filename ) 

x86_64:
619a293df04db258d71f8f4643ee261c  tzdata-2011d-1.el5.x86_64.rpm
f5f7ef1e23e908faa2a9f06f67cad0d9  tzdata-java-2011d-1.el5.x86_64.rpm

Source:
339790ce9be9bd871422f12fb549eff2  tzdata-2011d-1.el5.src.rpm


-- 
Karanbir Singh
CentOS Project { http://www.centos.org/ }
irc: z00dax, #cen...@irc.freenode.net



[CentOS-announce] CESA-2011:0373 Important CentOS 5 x86_64 xulrunner Update

2011-04-15 Thread Karanbir Singh

CentOS Errata and Security Advisory 2011:0373 Important

Upstream details at : https://rhn.redhat.com/errata/RHSA-2011-0373.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( md5sum Filename ) 

x86_64:
db845cbac7ed905e44bdbe38a805bf73  xulrunner-1.9.2.15-2.el5_6.i386.rpm
78cea92b4ec910716a06dfe714b26825  xulrunner-1.9.2.15-2.el5_6.x86_64.rpm
14b303569b3cab60fc97fd6e6573e18e  xulrunner-devel-1.9.2.15-2.el5_6.i386.rpm
3ff3d738f81c1f2084df434fc139be3c  xulrunner-devel-1.9.2.15-2.el5_6.x86_64.rpm

Source:
01f4e3280bce91d44086da118bca7483  xulrunner-1.9.2.15-2.el5_6.src.rpm


-- 
Karanbir Singh
CentOS Project { http://www.centos.org/ }
irc: z00dax, #cen...@irc.freenode.net



[CentOS-announce] CEBA-2011:0342 CentOS 5 i386 xen Update

2011-04-15 Thread Karanbir Singh

CentOS Errata and Bugfix Advisory 2011:0342 

Upstream details at : https://rhn.redhat.com/errata/RHBA-2011-0342.html

The following updated files have been uploaded and are currently 
syncing to the mirrors: ( md5sum Filename ) 

i386:
7e2ed18dc6317fdaacfd2ecedf0e0164  xen-3.0.3-120.el5_6.1.i386.rpm
888dccb4b245413d4584b69ae896665f  xen-devel-3.0.3-120.el5_6.1.i386.rpm
e9e2e6a93b491f898feef7e777f2d9c9  xen-libs-3.0.3-120.el5_6.1.i386.rpm

Source:
c806b3cf462b996eca62d0fbf645e892  xen-3.0.3-120.el5_6.1.src.rpm


-- 
Karanbir Singh
CentOS Project { http://www.centos.org/ }
irc: z00dax, #cen...@irc.freenode.net



Re: [CentOS-es] Question about dhcpd

2011-04-15 Thread Fernando Rojas de la Torre
On 12/04/11 22:22, Ing. Ernesto Pérez Estévez wrote:
 Fernando Rojas de la Torre wrote:
 What can cause that, even though I have a client specified with

 another DHCP server assigning it?

 host xen.main {
hardware ethernet xx:xx:xx:xx:xx:xx;
fixed-address 10.10.10.10;
}

 the DHCP server assigns the same IP to another client?

 I am completely sure the IP was not set manually


No. The same server records it for me in dhcpd.leases. Let's say the 
server is 10.10.10.254

This had never happened to me before.

When 10.10.10.10 tries to connect, /var/log/messages reports a 
DHCPDECLINE
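One way to see which client the server thinks holds that address is to read its lease straight out of the leases file (on CentOS 5 normally /var/lib/dhcpd/dhcpd.leases). A sketch against a made-up excerpt — the lease text and MAC below are invented for illustration:

```shell
# Made-up lease excerpt standing in for /var/lib/dhcpd/dhcpd.leases
cat > leases-demo <<'EOF'
lease 10.10.10.10 {
  binding state active;
  hardware ethernet 00:16:3e:aa:bb:cc;
}
EOF
# Print the MAC of whichever client currently holds 10.10.10.10
awk '/^lease 10\.10\.10\.10 /{f=1}
     f && /hardware ethernet/ {gsub(/;/,"",$3); print $3; f=0}' leases-demo
```

Comparing that MAC against the one in the `host xen.main` block would show whether the lease really went to a different machine.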
___
CentOS-es mailing list
CentOS-es@centos.org
http://lists.centos.org/mailman/listinfo/centos-es


[CentOS-es] Identical host cluster

2011-04-15 Thread Maykel Franco Hernandez


Hello, I would like to know whether there is any software that
replicates to a second server every change you make on the first. I know
there are tools like drbd with the ocfs2 or gfs filesystems, which can
even run both nodes as primaries so you can write to the same partition
simultaneously, but what I mean is replicating one server onto another:
every change made on one is made on the other. I don't know whether this
exists on CentOS, or whether it exists but is not open source, but I am
moving from Ubuntu/Debian to CentOS because, honestly, I find the way
services are administered, and how the system manages them, very fast.

Regards.


Re: [CentOS-es] Question about dhcpd

2011-04-15 Thread Eduardo Grosclaude
2011/4/14 Fernando Rojas de la Torre fernando.ro...@uniondetula.gob.mx:
 On 12/04/11 22:22, Ing. Ernesto Pérez Estévez wrote:
 Fernando Rojas de la Torre wrote:
 What can cause that, even though I have a client specified with

 another DHCP server assigning it?

 host xen.main {
            hardware ethernet xx:xx:xx:xx:xx:xx;
            fixed-address 10.10.10.10;
            }

 the DHCP server assigns the same IP to another client?

 I am completely sure the IP was not set manually

Are the MAC addresses different? I ask because I see this is a Xen
host... couldn't the other client be a guest of that same host, or of
another host that is configured identically?



 No. The same server records it for me in dhcpd.leases. Let's say the
 server is 10.10.10.254

 This had never happened to me before.

 When 10.10.10.10 tries to connect, /var/log/messages reports a
 DHCPDECLINE




-- 
Eduardo Grosclaude
Universidad Nacional del Comahue
Neuquen, Argentina


Re: [CentOS-es] Identical host cluster

2011-04-15 Thread Oscar Osta Pueyo
Hello,

2011/4/15 Maykel Franco Hernandez may...@maykel.sytes.net:


 Hello, I would like to know whether there is any software that
 replicates to a second server every change you make on the first. I know
 there are tools like drbd with the ocfs2 or gfs filesystems, which can
 even run both nodes as primaries so you can write to the same partition
 simultaneously, but what I mean is replicating one server onto another:
 every change made on one is made on the other. I don't know whether this
 exists on CentOS, or whether it exists but is not open source, but I am
 moving from Ubuntu/Debian to CentOS because, honestly, I find the way
 services are administered, and how the system manages them, very fast.

The only thing I know of is http://spacewalk.redhat.com/

* Inventory your systems (hardware and software information)
* Install and update software on your systems
* Collect and distribute your custom software packages into
manageable groups
* Provision (kickstart) your systems
* Manage and deploy configuration files to your systems
* Monitor your systems
* Provision and start/stop/configure virtual guests
* Distribute content across multiple geographical sites in an
efficient manner.

Good luck!!!

-- 
Oscar Osta Pueyo
oostap.lis...@gmail.com
_kiakli_


Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Christopher Chan
On Thursday, April 14, 2011 11:26 PM, Benjamin Franz wrote:
 On 04/14/2011 08:04 AM, Christopher Chan wrote:

 Then try both for your use case and your hardware. We have wide raid6 setups
 that do well over 500 MB/s write (that is: not all raid6 writes suck...).

 /me replaces all of Peter's cache with 64MB modules.

 Let's try again.

 If you are trying to imply that RAID6 can't go fast when write size is
 larger than the cache, you are simply wrong. Even with just an 8 x RAID6,
 I've tested a system at sustained sequential (not burst) 156 Mbytes/s out
 and 387 Mbytes/s in using 7200 rpm 1.5 TB drives. Bonnie++ results
 attached. Bonnie++ by default uses twice as much data as your available
 RAM to make sure you aren't just seeing cache. IOW: That machine only
 had 4GB of RAM and 256 MB of controller cache during the test but wrote
 and read 8 GB of data for the tests.

Wanna try that again with 64MB of cache only and tell us whether there 
is a difference in performance?

There is a reason why 3ware 85xx cards were complete rubbish when used 
for raid5, and it led to the 95xx/96xx series.
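A quick way to run this experiment yourself is a sequential write sized well past the controller cache, so the cache cannot absorb the whole workload; bonnie++ does this properly (2x RAM by default, as noted above), but even dd gives a rough number. A sketch only — the size here is tiny so it runs anywhere; on real hardware use a count far larger than RAM plus controller cache:

```shell
# Rough sequential-write probe (8 MiB here; use many GiB on a real array
# so neither the page cache nor the controller cache hides the disks).
# conv=fsync forces the data to disk before dd reports its rate.
dd if=/dev/zero of=./ddtest bs=1M count=8 conv=fsync 2>&1 | tail -1
rm -f ./ddtest
```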
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Expanding RAID 10 array, WAS: 40TB File System Recommendations

2011-04-15 Thread Christopher Chan
On Thursday, April 14, 2011 11:19 PM, Ray Van Dolson wrote:
 On Thu, Apr 14, 2011 at 10:44:00PM +0800, Christopher Chan wrote:
 On Thursday, April 14, 2011 10:11 PM, Ray Van Dolson wrote:
 On Thu, Apr 14, 2011 at 10:07:55PM +0800, Emmanuel Noobadmin wrote:
 On 4/14/11, John R Pierce pie...@hogranch.com wrote:
 since this is the centos list, I really didn't want to suggest this, but
 if I was building a 20 or 40TB or whatever storage server, I do believe
 I'd be strongly consider using Solaris, or one of its variants like
 OpenIndiana, with ZFS.

 ZFS was engineered from the ground up to scale to zettabytes

 I was actually considering this but then came news that Oracle was
 killing OpenSolaris and likely to be pushing OCFS so decided I
 probably don't want to have something come bite me a year or two down
 the road. I'm not sure how things developed since then though.

 But based on your recommendation and Christopher Chan's, it would seem
 like you guys don't think that long term support/updates would be an
 issue for ZFS?

 ZFS and OCFS play in different spaces.  And ZFS is going nowhere... if
 you want to use it on an open OS, OpenIndiana may be a good bet, but
 your best short-term / mature option would be Nexenta or Solaris
 Express.


 Huh? What gives Nexenta a better advantage over OpenIndiana? They are
 both in the same boat. Both will have to migrate to illumos and move
 away from the last OpenSolaris ON release. Oh, Nexenta has a company
 backing it? Makes no difference when both projects will be using the same
 core image. Now, if OpenIndiana resists using illumos, then you will
 have a case for Nexenta over OpenIndiana.

 OpenIndiana is in their what, first release?  I don't think that
 Nexenta 3.x is based on it *yet*.

 Both will eventually converge.

 In the meantime, yes, for storage needs I'd go with Nexenta for the
 reasons you mentioned. :)

Hardy userland, gcc compiled and gnu linked...hmm...I'll give Nexenta a 
shot after they start basing on perhaps Lucid repos.



 For personal use?  Maybe different factors.

 Nexenta the company of course will be contributing to OpenIndiana and
 Illumos...


Now that is news to me. I know that Garrett would be willing to spare a 
man IF the OpenIndiana guys start using illumos as their base for the 
next release...


Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Christopher Chan
On Thursday, April 14, 2011 11:30 PM, Les Mikesell wrote:
 On 4/14/2011 7:32 AM, Christopher Chan wrote:

 HAHAHAAAAHA

 The XFS codebase is the biggest pile of mess in the Linux kernel and you
 expect it to be not run into mysterious problems? Remember, XFS was
 PORTED over to Linux. It is not a 'native' thing to Linux.

 Well yeah, but the way I remember it, SGI was using it for real work
 like video editing and storing zillions of files back when Linux was a
 toy with a 2 gig file size limit and linear directory scans as the only
 option.   If you mean that the Linux side had a not-invented-here
 attitude about it and did the port badly you might be right...


No, the XFS guys had to work around the differences between the Linux vm 
and IRIX's and that eventually led to what we have today - a big messy 
pile of code. It would be no surprise for there to be stuff that gets 
triggered, imho.

I am not saying that XFS itself is bad. Just that the implementation on 
Linux was not quite the same quality as it is on IRIX.


[CentOS] Xorg

2011-04-15 Thread Michel Donais
For a few days I haven't been able to boot a server in graphical mode.
The screen goes black and I get a console (text) login.
I can log in as a user, and at that point 'startx' brings the graphical
interface up.

in /etc/inittab:
x:5:respawn:/etc/X11/prefdm -nodaemon
and XDMCP is alive.

On this server we have 12 stations driven by LTSP as a terminal server. We can 
boot the stations to a point where the X graphical interface doesn't come up 
and the terminal stays with a grey screen with a big X in the center.

Can somebody point me to a solution?
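Two things worth confirming for symptoms like this: the default runlevel and the display-manager respawn line in /etc/inittab. A sketch that checks a scratch copy so it runs anywhere (the two lines below assume the stock CentOS 5 layout; on the server itself you would grep /etc/inittab):

```shell
# Scratch copy standing in for /etc/inittab on a CentOS 5 server
cat > inittab-demo <<'EOF'
id:5:initdefault:
x:5:respawn:/etc/X11/prefdm -nodaemon
EOF
grep initdefault inittab-demo   # default runlevel must be 5 for a graphical boot
grep prefdm inittab-demo        # the respawn line must not be mangled
```

If startx works but the boot-time X does not, /var/log/Xorg.0.log from the failed boot is usually the next place to look.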

---
Michel Donais


Re: [CentOS] [OT] ups advice

2011-04-15 Thread Drew
 - Stick to APC

Five years ago I would have said that. Having worked with Liebert's
GXT2 & GXT3 units now for the last few years, I'm not so sure I'd want
to go back to APC. For us the biggest bonus of Liebert was we got true
online (double conversion) UPS kit at the same price point as APC's
line-interactive Smart-UPS family.


-- 
Drew

Nothing in life is to be feared. It is only to be understood.
--Marie Curie


Re: [CentOS] Expanding RAID 10 array, WAS: 40TB File System Recommendations

2011-04-15 Thread Christopher Chan
On Friday, April 15, 2011 02:46 AM, John R Pierce wrote:
 On 04/14/11 7:44 AM, Christopher Chan wrote:
 Now, if OpenIndiana resists using illumos...

 openindiana is under the Illumos project umbrella.  They aren't going to
 use anything else.

Eh? I was under the impression that they are separate and that Garrett 
Damore was rather unhappy with the initial direction of OpenIndiana in 
not preparing for an illumos release. 148 is still not illumos as far as 
I know.


[CentOS] cents 5.6 ..... futur

2011-04-15 Thread Michel Donais
Passing from CentOS 5.5 to 5.6 was easy as an upgrade.

Will it be the same from 5.6 to 6.0, or will a full install be better?

---
Michel Donais


Re: [CentOS] CentOs 5.6 and Time Sync

2011-04-15 Thread Cal Webster
On Thu, 2011-04-14 at 13:28 +0200, Simon Matter wrote:
  On 4/14/2011 6:47 AM, Johnny Hughes wrote:
 
  Is it really true that the time is working perfectly with one of the
  other kernels (the older ones)?
 
 
 
 
  Johnny,
 
 Yes, as long as I run the older 5.5 kernel my time is perfect.
 All clients can get time from this machine with no issues. As soon as I run
 the new kernel, or the Plus kernel for that matter, the time goes downhill.
 Uphill, actually.
 
   To answer the previous question, I do have the HW clock set to UTC.
 Everything is stock from the initial install of the package.
 
 Did you check in dmesg which timer is being used? (I think it can also be
 seen somewhere in /proc but I don't remember.) If it's hpet, you could try
 to disable it. That was 'hpet=disable' for i686 and 'nohpet' for x86_64;
 don't know how it is with current kernels.
 
 Simon

Forgive me if I've missed a later post but it looked like this thread
was stagnant...

You may have something here Simon. I was thinking about your suggestion
that it could be a timer issue. I'm wondering if the default clocksource
or some related timer kernel parameter has been changed between
2.6.18-194.17.4.el5 (5.5) and 2.6.18-238.5.1.el5 (5.6). 

Timer related issues could very well account for this large,
inconsistent NTP drift as well as Florin Andrei's bizarre tar, scp,
and NTP issues in the "[CentOS] bizarre system slowness" thread. System
interrupts are based on the clocksource chosen by (or configured in) the
kernel. Any service or facility that uses these interrupts could be
experiencing problems.

Can anyone on the list confirm whether or not timer related kernel
parameters have changed in 5.6? I don't have source handy and I'm going
out the door in minutes.

Reading up on kernel timer options, I came across these articles.

# Discusses mis-detected timer frequency
9.2.4.2.7. Kernel 2.6 Mis-Detecting CPU TSC Frequency
http://support.ntp.org/bin/view/Support/KnownOsIssues#Section_9.2.4.2.7.

# Describes ntpd instability from some time sources
# Includes data and graphs from detailed study
http://www.ep.ph.bham.ac.uk/general/support/adjtimex.html


I checked clock sources on a few systems under my control to see what
came up. None are experiencing this problem. The CentOS and FC12
machines are isolated from the Internet while the FC14 laptop connects.
My sample CentOS 5.5 & 5.6 systems are different hardware platforms. The
5.6 box doesn't have the hpet timer available so it may just not be
susceptible to this problem. I'll be updating the 5.5 sample to 5.6
tomorrow which does have hpet available so I should know something more
then.

# Used these to get available and current clocksource:
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
cat /sys/devices/system/clocksource/clocksource0/current_clocksource 

# CentOS 5.5:
Available: acpi_pm jiffies hpet tsc pit
Current: tsc

# CentOS 5.6:
Available: acpi_pm jiffies tsc pit
Current: tsc

# Fedora 12: 
Available: tsc hpet acpi_pm
Current: tsc

# Fedora 14: Using hpet
Available: hpet acpi_pm
Current: hpet
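The two cat commands above can be wrapped in a small guard so the same check runs on boxes that lack the sysfs node (the path is exactly the one given earlier; nothing else is assumed):

```shell
# Report available and current clocksource, degrading gracefully on
# kernels that do not expose the sysfs node.
d=/sys/devices/system/clocksource/clocksource0
if [ -r "$d/current_clocksource" ]; then
    echo "available: $(cat "$d/available_clocksource")"
    echo "current:   $(cat "$d/current_clocksource")"
else
    echo "no clocksource node under $d"
fi
```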



Re: [CentOS] cents 5.6 ..... futur

2011-04-15 Thread Eero Volotinen
2011/4/15 Michel Donais don...@telupton.com:
 Passing from CentOS 5.5 to 5.6 was easy as an upgrade.

 Will it be the same from 5.6 to 6.0, or will a full install be better?

Well, upgrading across a major version is usually neither easy nor the
preferred method. Use a full install instead.

--
Eero


Re: [CentOS] cents 5.6 ..... futur

2011-04-15 Thread Barry Brimer
 Will it be the same from 5.6 to 6.0, or will a full install be better?

Full installs are always recommended between major versions.


Re: [CentOS] Expanding RAID 10 array, WAS: 40TB File System Recommendations

2011-04-15 Thread John R Pierce
On 04/14/11 5:43 PM, Christopher Chan wrote:
 On Friday, April 15, 2011 02:46 AM, John R Pierce wrote:
 On 04/14/11 7:44 AM, Christopher Chan wrote:
 Now, if OpenIndiana resists using illumos...
 openindiana is under the Illumos project umbrella.  They aren't going to
 use anything else.
 Eh? I was under the impression that they are separate and that Garrett
 Damore was rather unhappy with the initial direction of OpenIndiana in
 not preparing for an illumos release. 148 is still not illumos as far as
 I know.

afaik, both are still using pretty much the last opensolaris kernel with 
minor changes


I was going on this, which says OpenIndiana is a member of the Illumos 
Foundation, that Illumos was providing the core/kernel, and OpenIndiana 
is integrating it into a complete system aka distribution
http://wiki.openindiana.org/oi/Frequently+Asked+Questions#FrequentlyAskedQuestions-WhatistherelationshipbetweenOpenIndianaandIllumos%3F

They go on to say they are waiting for Illumos to mature before they 
integrate it.


Re: [CentOS] [OT] ups advice

2011-04-15 Thread admin lewis
2011/4/14 John R Pierce pie...@hogranch.com:
 On 04/14/11 9:06 AM, admin lewis wrote:
 Hi
 I have a Dell PowerEdge T310 *tower* server.. I have to buy a UPS by
 APC... could anyone help me with a hint?
 Could a simple Smart-UPS 1000 be enough?



 apc smartups or eaton powerware would be my choices.  1000VA should be
 fine.

 avoid consumer UPS's like apc backups, they are junk.


 how long do you need the system to stay powered when the power fails?
 just long enough to shut down?  or do you need it to stay up for some
 period of time?



A few minutes... 10 minutes should be enough, and then shut down the machine.
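For that requirement, a back-of-envelope runtime estimate is easy to do. All numbers below are assumptions for illustration, not specs for any particular APC model: a T310 drawing roughly 200 W against roughly 120 Wh of usable battery in a 1000 VA class unit:

```shell
load_w=200        # assumed server draw under load (watts) -- not a measured figure
battery_wh=120    # assumed usable battery energy (watt-hours) -- check the datasheet
# minutes ~= Wh / W * 60; this ignores inverter losses and the battery
# discharge curve, so treat the result as optimistic
echo "$(( battery_wh * 60 / load_w )) minutes"   # → 36 minutes
```

Even halved for real-world losses, that comfortably covers a 10-minute hold-up plus shutdown.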


Re: [CentOS] cents 5.6 ..... futur

2011-04-15 Thread Ljubomir Ljubojevic
Michel Donais wrote:
 To pass from Centos 5.5 to 5.6 it was easy as an upgrade.
  
 Will it be the same from 5.6 to 6.0 or a full install will be better.
  
 ---
There is such a big difference between them (base packages, package and 
system design, dependencies) that a full install will be necessary, not 
just recommended. I think an upgrade might even be impossible.

Ljubomir


Re: [CentOS] Expanding RAID 10 array, WAS: 40TB File System Recommendations

2011-04-15 Thread Ljubomir Ljubojevic
John R Pierce wrote:
 On 04/14/11 5:43 PM, Christopher Chan wrote:
 On Friday, April 15, 2011 02:46 AM, John R Pierce wrote:
 On 04/14/11 7:44 AM, Christopher Chan wrote:
 Now, if OpenIndiana resists using illumos...
 openindiana is under the Illumos project umbrella.  They aren't going to
 use anything else.
 Eh? I was under the impression that they are separate and that Garrett
 Damore was rather unhappy with the initial direction of OpenIndiana in
 not preparing for an illumos release. 148 is still not illumos as far as
 I know.
 
 afaik, both are still using pretty much the last opensolaris kernel with 
 minor changes
 
 
 I was going on this, which says OpenIndiana is a member of the Illumos 
 Foundation, that Illumos was providing the core/kernel, and OpenIndiana 
 is integrating it into a complete system aka distribution
 http://wiki.openindiana.org/oi/Frequently+Asked+Questions#FrequentlyAskedQuestions-WhatistherelationshipbetweenOpenIndianaandIllumos%3F
 
 They go onto say they are waiting for Illumos to mature before they 
 integrate it.

Ahem... this is the CentOS mailing list; maybe continue in private?

Ljubomir


Re: [CentOS] speed-tuning samba?

2011-04-15 Thread Andrzej Szymański
On 2011-04-14 20:16, Les Mikesell wrote:
 One thing in particular that I'd like to make faster is access to a set
 of libraries (boost, etc.) that are in a directory mapped by several
 windows boxes (mostly VMs on different machines) used as build servers.


I usually run samba with defaults, as playing with the settings did not 
change much in my case. However, I found in one of the IBM redbooks 
(http://www.redbooks.ibm.com/redpapers/pdfs/redp4285.pdf on page 131) 
that disabling tcp sack and dsack is recommended on a samba box working 
on a gigabit LAN (when samba host and clients are in the same LAN).

In one case it helped a lot, in the other it did not change anything, so 
you should try it on your own:
sysctl -w net.ipv4.tcp_sack=0
sysctl -w net.ipv4.tcp_dsack=0
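The two sysctl commands above only last until reboot; on CentOS 5 the usual way to make them stick is a pair of lines in /etc/sysctl.conf (applied at boot, or immediately with `sysctl -p`). A sketch that writes to a demo file so it is safe to run anywhere:

```shell
# Demo file standing in for /etc/sysctl.conf
conf=./sysctl-demo.conf
cat > "$conf" <<'EOF'
# Disable TCP SACK/DSACK on a low-loss gigabit LAN (per the redbook tip)
net.ipv4.tcp_sack = 0
net.ipv4.tcp_dsack = 0
EOF
# On the real box: sysctl -p /etc/sysctl.conf applies the file without a reboot
grep -c '^net\.ipv4' "$conf"   # → 2
```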

Andrzej


Re: [CentOS] Expanding RAID 10 array, WAS: 40TB File System Recommendations

2011-04-15 Thread Rudi Ahlers
On Fri, Apr 15, 2011 at 10:40 AM, Ljubomir Ljubojevic off...@plnet.rs wrote:

 John R Pierce wrote:
  On 04/14/11 5:43 PM, Christopher Chan wrote:
  On Friday, April 15, 2011 02:46 AM, John R Pierce wrote:
  On 04/14/11 7:44 AM, Christopher Chan wrote:
  Now, if OpenIndiana resists using illumos...
  openindiana is under the Illumos project umbrella.  They aren't going
 to
  use anything else.
  Eh? I was under the impression that they are separate and that Garrett
  Damore was rather unhappy with the initial direction of OpenIndiana in
  not preparing for an illumos release. 148 is still not illumos as far as
  I know.
 
  afaik, both are still using pretty much the last opensolaris kernel with
  minor changes
 
 
  I was going on this, which says OpenIndiana is a member of the Illumos
  Foundation, that Illumos was providing the core/kernel, and OpenIndiana
  is integrating it into a complete system aka distribution
 
 http://wiki.openindiana.org/oi/Frequently+Asked+Questions#FrequentlyAskedQuestions-WhatistherelationshipbetweenOpenIndianaandIllumos%3F
 
  They go onto say they are waiting for Illumos to mature before they
  integrate it.

 Eham..., CentOS mailinglist maybe to continue in private?

 Ljubomir



Ahem... many people are learning a lot more from this thread than from many
of the other threads in the past few days. Let them continue, and don't
subscribe to the thread :)


-- 
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532


Re: [CentOS] CentOS on SSDs...

2011-04-15 Thread John Doe
Thanks to all for the info.

Guess I will either keep CentOS 5 and have to compile my own kernel for the 
discard option; or wait for CentOS 6...

Thx,
JD


Re: [CentOS] Expanding RAID 10 array, WAS: 40TB File System Recommendations

2011-04-15 Thread Christopher Chan
On Friday, April 15, 2011 03:59 PM, John R Pierce wrote:
 On 04/14/11 5:43 PM, Christopher Chan wrote:
 On Friday, April 15, 2011 02:46 AM, John R Pierce wrote:
 On 04/14/11 7:44 AM, Christopher Chan wrote:
 Now, if OpenIndiana resists using illumos...
 openindiana is under the Illumos project umbrella.  They aren't going to
 use anything else.
 Eh? I was under the impression that they are separate and that Garrett
 Damore was rather unhappy with the initial direction of OpenIndiana in
 not preparing for an illumos release. 148 is still not illumos as far as
 I know.

 afaik, both are still using pretty much the last opensolaris kernel with
 minor changes


or nice big changes from the standpoint of those who were pining for 
openindiana with b134+patches



 I was going on this, which says OpenIndiana is a member of the Illumos
 Foundation, that Illumos was providing the core/kernel, and OpenIndiana
 is integrating it into a complete system aka distribution
 http://wiki.openindiana.org/oi/Frequently+Asked+Questions#FrequentlyAskedQuestions-WhatistherelationshipbetweenOpenIndianaandIllumos%3F

oh i see.


 They go onto say they are waiting for Illumos to mature before they
 integrate it.

Yes... like getting g11n in. I guess the traction is there already.
OpenIndiana will be moving to illumos, so I guess it would be the one to
use if one wants a Sun cc-compiled and Sun-linked distro.

It's going to be interesting to see how all these different projects 
including CentOS play out.


Re: [CentOS] Expanding RAID 10 array, WAS: 40TB File System Recommendations

2011-04-15 Thread Ross Walker
On Apr 15, 2011, at 4:48 AM, Rudi Ahlers r...@softdux.com wrote:

 
 
 On Fri, Apr 15, 2011 at 10:40 AM, Ljubomir Ljubojevic off...@plnet.rs wrote:
 John R Pierce wrote:
  On 04/14/11 5:43 PM, Christopher Chan wrote:
  On Friday, April 15, 2011 02:46 AM, John R Pierce wrote:
  On 04/14/11 7:44 AM, Christopher Chan wrote:
  Now, if OpenIndiana resists using illumos...
  openindiana is under the Illumos project umbrella.  They aren't going to
  use anything else.
  Eh? I was under the impression that they are separate and that Garrett
  Damore was rather unhappy with the initial direction of OpenIndiana in
  not preparing for an illumos release. 148 is still not illumos as far as
  I know.
 
  afaik, both are still using pretty much the last opensolaris kernel with
  minor changes
 
 
  I was going on this, which says OpenIndiana is a member of the Illumos
  Foundation, that Illumos was providing the core/kernel, and OpenIndiana
  is integrating it into a complete system aka distribution
  http://wiki.openindiana.org/oi/Frequently+Asked+Questions#FrequentlyAskedQuestions-WhatistherelationshipbetweenOpenIndianaandIllumos%3F
 
  They go onto say they are waiting for Illumos to mature before they
  integrate it.
 
 Eham..., CentOS mailinglist maybe to continue in private?
 
 Ljubomir
 
 
 Eham., many people are learning a lot more from this thread than a lot of 
 the other threads in the past few days. let them continue, and don't 
 subscribe to the tread :)

I agree with both assessments, but since this is a CentOS list and this thread 
has now twisted into ZFS advocacy I must say as well, continue off list.

-Ross



Re: [CentOS] CentOs 5.6 and Time Sync

2011-04-15 Thread Johnny Hughes
On 04/14/2011 06:23 AM, Mailing List wrote:
 On 4/14/2011 6:47 AM, Johnny Hughes wrote:

 Is it really true that the time is working perfectly with one of the
 other kernels (the older ones)?



 
Johnny,
 
   Yes, as long as I run the older 5.5 kernel my time is perfect. All
 clients can sync from this machine with no issues. As soon as I run the new
 kernel, or the Plus kernel for that matter, the time goes downhill. Uphill
 actually...
   
 To answer the previous question, I do have the HW clock set to UTC.
 Everything is stock from the initial install of the package.
 
 Brian.

I do not see anything from Dell that is a model C151.

I also do not see anything in the RH bugzilla that is problematic for
older AMD processors and the clock, unless running KVM type virtual
machines.

Is this a VM or regular install?

If this a real machine, do you have the latest BIOS from Dell?

Do you have any special kernel options in grub?





[CentOS] php53 and MSSQL

2011-04-15 Thread John Beranek
[Reposted now I've joined the list, so I hopefully don't get moderated out]

Hi,

I've upgraded lots of machines to 5.6 (thanks!) and there was one
particular machine that I'd also like to upgrade to PHP 5.3.
Unfortunately it seems I can't.

On the machine I have php-mssql installed, and it appears that there is
no php53-mssql.

php-mssql is built from the php-extras SRPM, so is there going to be a
php53-extras SRPM?

I've checked upstream, and they also don't have a php53-mssql package,
so if this _were_ to be solved it'd have to be in the 'Extras'
repository, I guess...

Cheers,

John.

-- 
John Beranek To generalise is to be an idiot.
http://redux.org.uk/ -- William Blake



Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Benjamin Franz
On 04/14/2011 09:00 PM, Christopher Chan wrote:

 Wanna try that again with 64MB of cache only and tell us whether there
 is a difference in performance?

 There is a reason why 3ware 85xx cards were complete rubbish when used
 for raid5 and which led to the 95xx/96xx series.
 _

I don't happen to have any systems I can test with the 1.5TB drives 
without controller cache right now, but I have a system with some old 
500GB drives  (which are about half as fast as the 1.5TB drives in 
individual sustained I/O throughput) attached directly to onboard SATA 
ports in a 8 x RAID6 with *no* controller cache at all. The machine has 
16GB of RAM and bonnie++ therefore used 32GB of data for the test.

Version  1.96   --Sequential Output-- --Sequential Input- 
--Random-
Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- 
--Seeks--
MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  
/sec %CP
pbox332160M   389  98 76709  22 91071  26  2209  95 264892  26 
590.5  11
Latency 24190us1244ms1580ms   60411us   69901us   
42586us
Version  1.96   --Sequential Create-- Random 
Create
pbox3   -Create-- --Read--- -Delete-- -Create-- --Read--- 
-Delete--
   files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  
/sec %CP
  16 10910  31 + +++ + +++ 29293  80 + +++ 
+ +++
Latency   775us 610us 979us 740us 370us 
380us

Given that the underlaying drives are effectively something like half as 
fast as the drives in the other test, the results are quite comparable.

Cache doesn't make a lot of difference when you quickly write a lot more 
data than the cache can hold. The limiting factor becomes the slowest 
component - usually the drives themselves. Cache isn't magic performance 
pixie dust. It helps in certain use cases and is nearly irrelevant in 
others.
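A back-of-envelope sketch of the point above, assuming a hypothetical 256 MB controller cache (the array in this test has none) against the 32 GB bonnie++ working set:

```shell
# 32 GB of sequential writes vs. a 256 MB write-back cache: the cache
# can absorb well under 1% of the data, so the drives set the pace.
workload_mb=$((32 * 1024))
cache_mb=256
covered_pct=$(awk -v w="$workload_mb" -v c="$cache_mb" 'BEGIN { printf "%.2f", 100 * c / w }')
echo "cache covers ${covered_pct}% of the workload"
```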

-- 
Benjamin Franz


Re: [CentOS] php53 and eacclerator

2011-04-15 Thread Geoff Galitz
 I uploaded the spec here:
 http://ubliga.de/php-eaccelerator.spec
 
 It's adjusted for RHEL/Centos 5.6 so that it works with stock php53 
 packages - no need to pull in packages from other repos.
 


Thanks!

 


[CentOS] php53 and mcrypt

2011-04-15 Thread Geoff Galitz

More PHP fun!

I can see in the spec files that php-mcrypt support was removed by Red Hat.
I tried to find out why, but I don't have sufficient access to the Red Hat
Bugzilla. I am wondering if it is actually necessary, as I have also run
across a post or two indicating that applications which rely on mcrypt
still work with the new php53.

Perhaps mcrypt was superseded by another module or by PHP core code?



Re: [CentOS] [OT] ups advice

2011-04-15 Thread Howard Fleming
Some of the newer HP servers are very picky about power from UPSs, from
what I have read.

I have used several Best Ferrups UPSs over the years; other than one
that toasted its transformer, I have never had any trouble with them
(just replace the battery every 3 to 4 years).

They are picky about their input power, though: do not connect them to
an auto-regulating transformer (not the proper term?) or to the output
of other UPSs; it can cause interesting problems.
Howard

On 4/14/2011 15:33, Lamar Owen wrote:
 On Thursday, April 14, 2011 02:55:51 PM John R Pierce wrote:
 http://powerquality.eaton.com/Products-services/Backup-Power-UPS/5125.aspx
 or similar for this application.   I'd take one of those up versus the
 same size APC SmartUps any day.

 We have a 5KVA Best Ferrups here that has never worked correctly :-)  But 
 I've seen my share of toasted APC's, too.

 Currently we run older APC SmartUPS (pure sine) for the workstation stuff and 
 Symmetras in the Data Centers.  Looking to put in a Toshiba or similar 500KVA 
 in the secondary Data Center later in the year.

 BTW, another thing the 'good' UPS's do, more important than 'pure
 sinusoidal output' for computer purposes*, is buck/boost voltage regulation.

 Yes.

 * if you're running audio gear off a UPS, you definitely want the
 sinusoidal output, but thats another market entirely.

 Or old 3Com Corebuilder/CellPlex 7000 gear, which shuts down with anything 
 but pure sinewave.


Re: [CentOS] php53 and MSSQL

2011-04-15 Thread John Beranek
On 15/04/11 12:23, John Beranek wrote:
 [Reposted now I've joined the list, so I hopefully don't get moderated out]
 
 Hi,
 
 I've upgraded lots of machines to 5.6 (thanks!) and there was one
 particular machine that I'd also like to upgrade to PHP 5.3.
 Unfortunately it seems I can't.
 
 On the machine I have php-mssql installed, and it appears that there is
 no php53-mssql.

I was going to see if I could rebuild the php53 SRPM support with MSSQL
support, until I found that the SRPMs still aren't available on the
CentOS mirrors yet. Downloading the upstream RPM now, will see how that
goes...

John.

-- 
John Beranek To generalise is to be an idiot.
http://redux.org.uk/ -- William Blake





Re: [CentOS] php53 and mcrypt

2011-04-15 Thread Rainer Traut
Am 15.04.2011 13:32, schrieb Geoff Galitz:
 More PHP fun!
 I can see in the spec files that php-mcrypt support was removed by
 Redhat. I tried to find out why but I don't have sufficient access to
 redhat bugzilla. I am wondering if it is actually necessary as I have
 also run across a post or two that indicates applications that rely on
 mcrypt still work with the new php53.
 Perhaps mcrypt was superceded by another module or PHP core code?

Yeah, I had the same problem with the missing php_mcrypt. ;)
I did a full rebuild of php53 with a patched spec so that it produces
php53_mcrypt, but that is not very elegant.
The more elegant way would be to make an RPM for only the missing
modules, like EPEL's php-extras.
So I'm interested in this, too.

Rainer


Re: [CentOS] php53 and mcrypt

2011-04-15 Thread Phil Schaffner
Rainer Traut wrote on 04/15/2011 07:55 AM:
...
 Yeah, I had the same problem with missing php_mcrypt. ;)
 I did a full rebuild of php53 with patched spec so that it produces
 php53_mcrypt but that is not very elegant.
 The more elegant way to do it is to make an rpm for only the missing
 modules like EPEL's php-extras.
 So I'm interested in this, too.

Another possibility is using what IUS has already done and installing 
php53u packages.  See the following CentOS forum thread for details:
https://www.centos.org/modules/newbb/viewtopic.php?viewmode=flattopic_id=30881forum=38

Phil


Re: [CentOS] php53 and MSSQL

2011-04-15 Thread Phil Schaffner
John Beranek wrote on 04/15/2011 07:45 AM:
 On 15/04/11 12:23, John Beranek wrote:
 [Reposted now I've joined the list, so I hopefully don't get moderated out]

 Hi,

 I've upgraded lots of machines to 5.6 (thanks!) and there was one
 particular machine that I'd also like to upgrade to PHP 5.3.
 Unfortunately it seems I can't.

 On the machine I have php-mssql installed, and it appears that there is
 no php53-mssql.

 I was going to see if I could rebuild the php53 SRPM support with MSSQL
 support, until I found that the SRPMs still aren't available on the
 CentOS mirrors yet. Downloading the upstream RPM now, will see how that
 goes...


I sound like a shill for IUS this morning - not the case I assure you - 
but they have php53u-mssql-5.3.6-1.ius.el5

It probably will not work unless you uninstall php or php53 and install
their whole set.

Phil


Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Peter Kjellström
On Thursday, April 14, 2011 05:26:41 PM Ross Walker wrote:
 2011/4/14 Peter Kjellström c...@nsc.liu.se:
...
  While I do concede the obvious point regarding rebuild time (raid6 takes
  from long to very long to rebuild) I'd like to point out:
  
   * If you do the math for a 12 drive raid10 vs raid6 then (using actual
  data from ~500 1T drives on HP cciss controllers during two years)
  raid10 is ~3x more likely to cause hard data loss than raid6.
  
   * mtbf is not everything there's also the thing called unrecoverable
  read errors. If you hit one while rebuilding your raid10 you're toast
  while in the raid6 case you'll use your 2nd parity and continue the
  rebuild.
 
 You mean if the other side of the mirror fails while rebuilding it.

No, the drive (unrecoverably) failing to read a sector is not the same thing 
as a drive failure. Drive failure frequency expressed in mtbf is around 1M 
hours (even though including predictive fail we see more like 250K hours). 
Unrecoverable read error rate (per sector) was quite recently on the order of 
1x to 10x of the drive size (a drive I looked up now was spec'ed a lot higher 
at ~1000x drive size). If we assume a raid10 rebuild time of 12h and an 
unrecoverable read error once every 10x of drive size then the effective mean 
time between read error is 120h (two to ten thousand times worse than the 
drive mtbf). Admittedly these numbers are hard to get and equally hard to 
trust (or double check).
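The arithmetic in the paragraph above can be sketched directly, using the assumed numbers from the post:

```shell
# Assumed numbers: a raid10 rebuild reads one full drive in 12h, and an
# unrecoverable read error occurs once per 10 drive-reads.
rebuild_hours=12
reads_per_ure=10
mtbe_hours=$((rebuild_hours * reads_per_ure))
echo "effective mean time between read errors: ${mtbe_hours}h"
```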

What it all comes down to is that raid10 (assuming just double, not triple 
copy) stores your data with one extra copy/parity and in a single drive 
failure scenario you have zero extra data left (on that part of the array). 
That is, you depend on each and every bit of that (meaning the degraded part) 
data being correctly read. This means you very much want both:

 1) Very fast rebuilds (= you need hot-spare)
 2) An unrecoverable read error rate much larger than your drive size

or as you suggest below:

 3) Triple copy

 Yes this is true, of course if this happens with RAID6 it will rebuild
 from parity IF there is a second hotspare available,

This is wrong, hot-spares are not that necessary when using raid6. This has to 
do with the fact that rebuild times (time from you start being vulnerable to 
whatever rebuild completes) are already long. An added 12h for a tech to swap 
in the spare only marginally increases your risks.
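A rough feel for "marginally", assuming the ~6.5-day raid6 rebuild window mentioned later in the post:

```shell
# Extra exposure from waiting ~12h for a tech to swap in a spare,
# relative to a raid6 rebuild window of about 6.5 days.
extra_pct=$(awk 'BEGIN { printf "%.0f", 100 * 12 / (6.5 * 24) }')
echo "about ${extra_pct}% longer exposure window"
```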

 cause remember
 the first failure wasn't cleared before the second failure occurred.
 Now your RAID6 is in severe degraded state, one more failure before
 either of these disks is rebuilt will mean toast for the array.

All of this was taken into account in my original example above. In the end 
(with my data) raid10 was around 3x more likely to cause ultimate data loss 
than raid6.

 Now
 the performance of the array is practically unusable and the load on
 the disks is high as it does a full recalculation rebuild, and if they
 are large it will be high for a very long time, now if any other disk
 in the very large RAID6 array is near failure, or has a bad sector,
 this taxing load could very well push it over the edge

In my example a 12-drive raid6 rebuild takes 6-7 days; this works out to < 5 
MB/s seq read per drive. This added load is not very noticeable in our 
environment (taking into account normal patrol reads and user data traffic).

Either way, the general problem of [rebuild stress] pushing drives over the 
edge is a larger threat to raid10 than raid6 (it being fatal in the first 
case...).

 and the risk of
 such an event occurring increases with the size of the array and the
 size of the disk surface.
 
 I think this is where the mdraid raid10 shines because it can have 3
 copies (or more) of the data instead of just two,

I think we've now moved into what most people would call unreasonable. Let's 
see what we have for a 12 drive box (quite common 2U size):

 raid6: 12x on raid6 no hot spare (see argument above) = 10 data drives
 raid10: 11x triple store on raid10, one spare = 3.66 data drives

or (if your raid's not odd-drive capable):

 raid10: 9x triple store on raid10, one to three spares = 3 data drives

(ok, yes you could get 4 data drives out of it if you skipped hot-spare)

That is almost a 2.7x-3.3x diff! My users sure care if their X $ results in 
1/3 the space (or cost = 3x for the same space if you prefer).
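The capacity comparison above works out as follows (a quick sketch; the post's 3.66 appears to be 11/3 truncated rather than rounded):

```shell
# 12-slot 2U box, numbers as in the post: raid6 spends 2 drives on
# parity; raid10 with triple copy stores each block three times.
raid6_data=$((12 - 2))
raid10_data=$(awk 'BEGIN { printf "%.2f", 11 / 3 }')
ratio=$(awk -v a="$raid6_data" -v b="$raid10_data" 'BEGIN { printf "%.1f", a / b }')
echo "raid6=${raid6_data} raid10=${raid10_data} ratio=${ratio}x"
```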

On top of this, most raid implementations for raid10 lack triple-copy 
functionality.

Also note that raid10 that allows for odd number of drives is more vulnerable 
to 2nd drive failures resulting in an even larger than 3x improvement using 
raid6 (vs double copy odd drive handling raid10).

/Peter

 of course a three
 times (or more) the cost. It also allows for uneven number of disks as
 it just saves copies on different spindles rather then mirrors. This
 I think provides the best protection against failure and the best
 performance, but at the worst cost, but with 2TB and 4TB disks coming
 out
...



Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Christopher Chan
On Friday, April 15, 2011 07:24 PM, Benjamin Franz wrote:
 On 04/14/2011 09:00 PM, Christopher Chan wrote:

 Wanna try that again with 64MB of cache only and tell us whether there
 is a difference in performance?

 There is a reason why 3ware 85xx cards were complete rubbish when used
 for raid5 and which led to the 95xx/96xx series.
 _

 I don't happen to have any systems I can test with the 1.5TB drives
 without controller cache right now, but I have a system with some old
 500GB drives  (which are about half as fast as the 1.5TB drives in
 individual sustained I/O throughput) attached directly to onboard SATA
 ports in a 8 x RAID6 with *no* controller cache at all. The machine has
 16GB of RAM and bonnie++ therefore used 32GB of data for the test.

 Version  1.96   --Sequential Output-- --Sequential Input-
 --Random-
 Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
 --Seeks--
 MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
 /sec %CP
 pbox332160M   389  98 76709  22 91071  26  2209  95 264892  26
 590.5  11
 Latency 24190us1244ms1580ms   60411us   69901us
 42586us
 Version  1.96   --Sequential Create-- Random
 Create
 pbox3   -Create-- --Read--- -Delete-- -Create-- --Read---
 -Delete--
 files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 /sec %CP
16 10910  31 + +++ + +++ 29293  80 + +++
 + +++
 Latency   775us 610us 979us 740us 370us
 380us

 Given that the underlaying drives are effectively something like half as
 fast as the drives in the other test, the results are quite comparable.

Woohoo, next we will be seeing md raid6 also giving comparable results 
if that is the case. I am not the only person on this list who thinks 
cache is king for raid5/6 on hardware raid boards, and using hardware 
raid + BBU cache for better performance is one of the two reasons why we 
don't do md raid5/6.



 Cache doesn't make a lot of difference when you quickly write a lot more
 data than the cache can hold. The limiting factor becomes the slowest
 component - usually the drives themselves. Cache isn't magic performance
 pixie dust. It helps in certain use cases and is nearly irrelevant in
 others.


Yeah, you are right - but cache is primarily to buffer the writes for 
performance. Why else go through the expense of getting bbu cache? So 
what happens when you tweak bonnie a bit?


Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Rudi Ahlers
On Fri, Apr 15, 2011 at 3:05 PM, Christopher Chan 
christopher.c...@bradbury.edu.hk wrote:

 On Friday, April 15, 2011 07:24 PM, Benjamin Franz wrote:
  On 04/14/2011 09:00 PM, Christopher Chan wrote:
 
  Wanna try that again with 64MB of cache only and tell us whether there
  is a difference in performance?
 
  There is a reason why 3ware 85xx cards were complete rubbish when used
  for raid5 and which led to the 95xx/96xx series.
  _
 
  I don't happen to have any systems I can test with the 1.5TB drives
  without controller cache right now, but I have a system with some old
  500GB drives  (which are about half as fast as the 1.5TB drives in
  individual sustained I/O throughput) attached directly to onboard SATA
  ports in a 8 x RAID6 with *no* controller cache at all. The machine has
  16GB of RAM and bonnie++ therefore used 32GB of data for the test.
 
  Version  1.96   --Sequential Output-- --Sequential Input-
  --Random-
  Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
  --Seeks--
  MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
  /sec %CP
  pbox332160M   389  98 76709  22 91071  26  2209  95 264892  26
  590.5  11
  Latency 24190us1244ms1580ms   60411us   69901us
  42586us
  Version  1.96   --Sequential Create-- Random
  Create
  pbox3   -Create-- --Read--- -Delete-- -Create-- --Read---
  -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
  /sec %CP
 16 10910  31 + +++ + +++ 29293  80 + +++
  + +++
  Latency   775us 610us 979us 740us 370us
  380us
 
  Given that the underlaying drives are effectively something like half as
  fast as the drives in the other test, the results are quite comparable.

 Woohoo, next we will be seeing md raid6 also giving comparable results
 if that is the case. I am not the only person on this list that thinks
 cache is king for raid5/6 on hardware raid boards and the using hardware
 raid + bbu cache for better performance one of the two reasons why we
 don't do md raid5/6.


 
  Cache doesn't make a lot of difference when you quickly write a lot more
  data than the cache can hold. The limiting factor becomes the slowest
  component - usually the drives themselves. Cache isn't magic performance
  pixie dust. It helps in certain use cases and is nearly irrelevant in
  others.
 

 Yeah, you are right - but cache is primarily to buffer the writes for
 performance. Why else go through the expense of getting bbu cache? So
 what happens when you tweak bonnie a bit?
 ___



As a matter of interest, does anyone know how to use an SSD drive for cache
purposes on Linux software RAID drives? ZFS has this feature, and it makes a
helluva difference to a storage server's performance.



-- 
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532


Re: [CentOS] php53 and MSSQL

2011-04-15 Thread John Beranek
On 15/04/11 13:49, Phil Schaffner wrote:
 John Beranek wrote on 04/15/2011 07:45 AM:
 On 15/04/11 12:23, John Beranek wrote:
 [Reposted now I've joined the list, so I hopefully don't get moderated out]

 Hi,

 I've upgraded lots of machines to 5.6 (thanks!) and there was one
 particular machine that I'd also like to upgrade to PHP 5.3.
 Unfortunately it seems I can't.

 On the machine I have php-mssql installed, and it appears that there is
 no php53-mssql.

 I was going to see if I could rebuild the php53 SRPM support with MSSQL
 support, until I found that the SRPMs still aren't available on the
 CentOS mirrors yet. Downloading the upstream RPM now, will see how that
 goes...
 
 
 I sound like a shill for IUS this morning - not the case I assure you - 
 but they have php53u-mssql-5.3.6-1.ius.el5

Well, I've now rebuilt the RHEL SRPM with mssql support. It's now built
in the openSUSE Build Service at:

https://build.opensuse.org/package/show?package=php53project=home%3Ajohnberanek%3Aphp53_centos

Not ideal, in that it's the whole php53 SRPM, and additionally because
OBS is currently building with CentOS 5.5 instead of 5.6. The latter
issue led me to raise a bug in the OBS Bugzilla:

https://bugzilla.novell.com/show_bug.cgi?id=687848
Update CentOS build to 5.6

I installed my built PHP 5.3 RPMs on the machine I wanted them on -
painful! Why do you need to remove the PHP 5.1 RPMs before you can
install the 'php53' ones? Surely the php53 RPMs could have had
Obsoletes lines!?

John.

-- 
John Beranek To generalise is to be an idiot.
http://redux.org.uk/ -- William Blake





Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Jerry Franz
On 04/15/2011 06:05 AM, Christopher Chan wrote:

 Woohoo, next we will be seeing md raid6 also giving comparable results
 if that is the case. I am not the only person on this list that thinks
 cache is king for raid5/6 on hardware raid boards and the using hardware
 raid + bbu cache for better performance one of the two reasons why we
 don't do md raid5/6.



That *is* md RAID6. Sorry I didn't make that clear. I don't use anyone's 
hardware RAID6 right now because I haven't found a board so far that was 
as fast as using md. I get better performance from even a BBU backed 95X 
series 3ware board by using it to serve the drives as JBOD and then 
using md to do the actual raid.

 Yeah, you are right - but cache is primarily to buffer the writes for
 performance. Why else go through the expense of getting bbu cache? So
 what happens when you tweak bonnie a bit?

For smaller writes. When writes *do* fit in the cache you get a big 
bump. As I said: Helps some cases, not all cases. BBU backed cache helps 
if you have lots of small writes. Not so much if you are writing 
gigabytes of stuff more sequentially.

-- 
Benjamin Franz


[CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Hans Vos
Hello,

Earlier this week I installed a test server with CentOS 5.6 with 
Virtualization enabled during the installer. Today I installed another 
server using the same method (they are identical servers). I just did a 
yum update and I found something curious. The two servers have different 
kernels: server 1 is at the 238.9.1 release and server 2 at 238.5.1. How can 
this be? How do I get the latest version on server 2? If I run yum update, 
there are none available.

If I input xm info I get this one server 1:

host   : server1
release: 2.6.18-238.9.1.el5xen
version: #1 SMP Tue Apr 12 18:53:56 EDT 2011
machine: x86_64
nr_cpus: 4
nr_nodes   : 1
sockets_per_node   : 1
cores_per_socket   : 4
threads_per_core   : 1
cpu_mhz: 2400
hw_caps: 
bfebfbff:20100800::0940:e3bd::0001
total_memory   : 4095
free_memory: 383
node_to_cpu: node0:0-3
xen_major  : 3
xen_minor  : 1
xen_extra  : .2-238.9.1.el5
xen_caps   : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 
hvm-3.0-x86_32p hvm-3.0-x86_64
xen_pagesize   : 4096
platform_params: virt_start=0x8000
xen_changeset  : unavailable
cc_compiler: gcc version 4.1.2 20080704 (Red Hat 4.1.2-50)
cc_compile_by  : mockbuild
cc_compile_domain  : centos.org
cc_compile_date: Tue Apr 12 18:01:03 EDT 2011
xend_config_format : 2

And on server 2 it is this:

host   : server2
release: 2.6.18-238.5.1.el5xen
version: #1 SMP Fri Apr 1 19:35:13 EDT 2011
machine: x86_64
nr_cpus: 4
nr_nodes   : 1
sockets_per_node   : 1
cores_per_socket   : 4
threads_per_core   : 1
cpu_mhz: 2400
hw_caps: 
bfebfbff:20100800::0940:e3bd::0001
total_memory   : 4095
free_memory: 383
node_to_cpu: node0:0-3
xen_major  : 3
xen_minor  : 1
xen_extra  : .2-238.5.1.el5
xen_caps   : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 
hvm-3.0-x86_32p hvm-3.0-x86_64
xen_pagesize   : 4096
platform_params: virt_start=0x8000
xen_changeset  : unavailable
cc_compiler: gcc version 4.1.2 20080704 (Red Hat 4.1.2-50)
cc_compile_by  : mockbuild
cc_compile_domain  : centos.org
cc_compile_date: Fri Apr  1 18:30:53 EDT 2011
xend_config_format : 2
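The two kernel releases above can be compared mechanically; a small sketch using version-aware sort to confirm which is newer:

```shell
# sort -V orders RPM-style version strings numerically per segment, so
# the last line is the newer of the two kernel releases from the post.
newest=$(printf '%s\n' 2.6.18-238.5.1.el5xen 2.6.18-238.9.1.el5xen | sort -V | tail -n 1)
echo "newer kernel: ${newest}"
```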

-- 
Met vriendelijke groet / With kind regards,

Hans Vos


Re: [CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Hans Vos
Hello Cal,

Thank you for your reply.

 It's possible that your #2 server has not rebooted or had problems with
 the latest kernel or just has the default set to something other than
 0 in grub.conf.

I did a reboot and checked the grub.conf. Should have mentioned that.

 What's the output of:

 egrep 'default|title' /etc/grub.conf

# egrep 'default|title' /etc/grub.conf
default=0
title CentOS (2.6.18-238.5.1.el5xen)
title CentOS (2.6.18-238.el5xen)

 yum list kernel | grep kernel

yum list kernel | grep kernel
kernel.x86_64    2.6.18-238.5.1.el5    updates

Also if I do yum info kernel-xen I get this on server 1:

Name   : kernel-xen
Arch   : x86_64
Version: 2.6.18
Release: 238.9.1.el5
Size   : 95 M
Repo   : installed
Summary: The Linux kernel compiled for Xen VM operations
URL: http://www.kernel.org/
License: GPLv2
Description: This package includes a version of the Linux kernel which
: runs in Xen VM. It works for both priviledged and 
unpriviledged guests.

And this on server 2:

Name   : kernel-xen
Arch   : x86_64
Version: 2.6.18
Release: 238.5.1.el5
Size   : 95 M
Repo   : installed
Summary: The Linux kernel compiled for Xen VM operations
URL: http://www.kernel.org/
License: GPLv2
Description: This package includes a version of the Linux kernel which
: runs in Xen VM. It works for both priviledged and 
unpriviledged guests.

-- 
Kind regards,

Hans Vos


Re: [CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Cal Webster
On Fri, 2011-04-15 at 16:37 +0200, Hans Vos wrote:
 Hello Cal,
 
 Thank you for your reply.
 
  It's possible that your #2 server has not rebooted or had problems with
  the latest kernel or just has the default set to something other than
  0 in grub.conf.
 
 I did a reboot and checked the grub.conf. Should have mentioned that.
 
  What's the output of:
 
  egrep 'default|title' /etc/grub.conf
 
 # egrep 'default|title' /etc/grub.conf
 default=0
 title CentOS (2.6.18-238.5.1.el5xen)
 title CentOS (2.6.18-238.el5xen)
 
  yum list kernel | grep kernel
 
 yum list kernel | grep kernel
 kernel.x86_64    2.6.18-238.5.1.el5    updates

Ryan is right. The mirrors need to sync up. That's most likely the
cause. Still, it's curious why you have two kernels listed in grub.conf
and only one listed from yum. You should also see the 2.6.18-238.el5xen
kernel listed.

./Cal
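The cross-check Cal describes can be scripted. A minimal sketch — the sample file below simply replays the grub.conf lines quoted earlier in the thread so it runs anywhere; on a real CentOS 5 host you would point the sed at /etc/grub.conf and compare the result with `rpm -q kernel kernel-xen`:

```shell
# Replay the grub.conf boot entries quoted above into a sample file
cat > /tmp/grub.conf.sample <<'EOF'
default=0
title CentOS (2.6.18-238.5.1.el5xen)
title CentOS (2.6.18-238.el5xen)
EOF

# Extract just the kernel version string from each "title" line
sed -n 's/^title CentOS (\(.*\))$/\1/p' /tmp/grub.conf.sample
# On a real host, compare this list against:
#   rpm -q kernel kernel-xen    (kernels the RPM database knows about)
#   uname -r                    (kernel actually running)
```

Any version present in grub.conf but missing from the rpm query (or vice versa) points at a half-finished update.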



Re: [CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Hans Vos
Hello,

 Ryan is right. The mirrors need to sync up. That's most likely the
 cause. Still, it's curious why you have two kernels listed in grub.conf
 and only one listed from yum. You should also see the 2.6.18-238.el5xen
 kernel listed.

Well, I copied the /var/cache/yum/timedhosts.txt file from server 1 to 
server 2. Then I ran yum update and all kinds of errors came flying at me. 
So I just SCP'ed the whole /var/cache/yum directory from server 1 to 
server 2. Ran yum update and there were the updates I was missing, 
including the new kernel-xen. I don't know if this was the *proper* way 
of fixing it, but it did the job :P.

-- 
Kind regards,

Hans Vos


Re: [CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Ryan Wagoner
On Fri, Apr 15, 2011 at 11:00 AM, Hans Vos h...@laissezfaire.nl wrote:
 Well, I copied the /var/cache/yum/timedhosts.txt file from server 1 to
 server 2. Then run yum update and all kinds of errors came flying at me.
 So I just SCP'ed the whole /var/cache/yum directory of server 1 to
 server 2. Ran yum update and there were the updates I was missing
 including the new kernel-xen. I don't know if this was the *proper* way
 of fixing it but it did the job :P.


Not sure about the outcome of copying the yum directory. I would have just
run yum clean all and then yum update.

Ryan


Re: [CentOS] [OT] ups advice

2011-04-15 Thread Howard Fleming


On 4/15/2011 10:48, John R Pierce wrote:
 On 04/15/11 4:38 AM, Howard Fleming wrote:
 I have used several Best Ferrups UPSs over the years, other than one
 that toasted it's transformer, have never had any trouble out of them
 (just replace the battery every 3 to 4 years).

 They are picky about their input power: do not run or connect them to
 an auto-regulating transformer (not the proper term?), or on the output
 of other UPSs; it can cause interesting problems...

 I don't think they make any FerrUPS anymore.  Those were based on a
 massive ferroresonant transformer which, yes, is very sensitive to the
 input frequency.  Specifically, they don't like generator power, unless
 it has extremely well regulated frequency output (such as a DC generator
 with a digital sinusoidal converter)

Eaton has the Ferrups line now (still available as far as I know).

I have actually run into the input frequency problem in the past with 
the Ferrups.

I was working at a gas company that for political reasons generated 
their own power in-house.  Had one Ferrups UPS (of 10?) that was 
complaining about it (kept going online/offline/online).  There is a 
parameter in the settings that can be adjusted to allow a greater input 
freq range on the UPS (59.5 - 60.5 Hz is the default range, from what I 
remember).  In this case that took care of the problem.

I have 3 1.4kw units at home, no trouble to date running them off of my 
backup generator (Campbell 5k unit).  They are also 18 years old at this 
point and still going... :o).  Running 3 of my CentOS servers at home in 
fact.


Re: [CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Cal Webster
On Fri, 2011-04-15 at 17:00 +0200, Hans Vos wrote:
 Hello,
 
  Ryan is right. The mirrors need to sync up. That's most likely the
  cause. Still, it's curious why you have two kernels listed in grub.conf
  and only one listed from yum. You should also see the 2.6.18-238.el5xen
  kernel listed.
 
 Well, I copied the /var/cache/yum/timedhosts.txt file from server 1 to 
 server 2. Then run yum update and all kinds of errors came flying at me. 
 So I just SCP'ed the whole /var/cache/yum directory of server 1 to 
 server 2. Ran yum update and there were the updates I was missing 
 including the new kernel-xen. I don't know if this was the *proper* way 
 of fixing it but it did the job :P.
 
You shouldn't need to copy the timedhosts.txt file; the fastestmirror
yum plugin should recreate it. You might check /var/log/yum.log
or /var/log/messages to make some sense of the errors. I don't see any
harm in using the cache from the other machine, though.




Re: [CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Cal Webster
On Fri, 2011-04-15 at 11:07 -0400, Ryan Wagoner wrote:
 On Fri, Apr 15, 2011 at 11:00 AM, Hans Vos h...@laissezfaire.nl wrote:
  Well, I copied the /var/cache/yum/timedhosts.txt file from server 1 to
  server 2. Then run yum update and all kinds of errors came flying at me.
  So I just SCP'ed the whole /var/cache/yum directory of server 1 to
  server 2. Ran yum update and there were the updates I was missing
  including the new kernel-xen. I don't know if this was the *proper* way
  of fixing it but it did the job :P.
 
 
 Not sure the outcome of copying the yum directory. I would have just
 run yum clean all then yum update.
 
 Ryan

+1



Re: [CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Hans Vos
 Not sure the outcome of copying the yum directory. I would have just
 run yum clean all then yum update.

Ah, thanks, I will put that in my personal Wiki for future reference. 
Noob here and it is a test environment at home :). Thanks for your help.

-- 
Kind regards,

Hans Vos


Re: [CentOS] CentOs 5.6 and Time Sync

2011-04-15 Thread Nataraj
On 04/15/2011 04:08 AM, Johnny Hughes wrote:
 On 04/14/2011 06:23 AM, Mailing List wrote:
 On 4/14/2011 6:47 AM, Johnny Hughes wrote:
 Is it really true that the time is working perfectly with one of the
 other kernels (the older ones)?



Johnny,

   Yes, as long as I run the older 5.5 kernel my time is perfect. All
 clients can sync time from this machine with no issues. As soon as I run
 the new kernel, or the Plus kernel for that matter, the time goes
 downhill. Uphill, actually.

 To answer the previous question, I do have the HW clock set to UTC.
 Everything is stock from the initial install of the package.

 Brian.
 I do not see anything from Dell that is a model C151.

 I also do not see anything in the RH bugzilla that is problematic for
 older AMD processors and the clock, unless running KVM type virtual
 machines.

 Is this a VM or regular install?

 If this a real machine, do you have the latest BIOS from Dell?

 Do you have any special kernel options in grub?



It also occurred to me to ask if this was running in a VM, but it sounded
like it was running on actual hardware. I once had a VMware VM in
which I had similar misbehavior of the clock.  Eventually I discovered
that the following simple program, when run inside the VM, would return
immediately instead of delaying for 10 seconds as it should.

#include <stdio.h>
/* #include <sys/select.h> */
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>


int main() {
    fd_set set;
    struct timeval timeout;
    int filedes = STDIN_FILENO;


    FD_ZERO(&set);
    FD_SET(filedes, &set);


    timeout.tv_sec = 10;
    timeout.tv_usec = 0;

    select(FD_SETSIZE, &set, NULL, NULL, &timeout);

    return 0;
}


I then found out that the ISP had set the host OS for my VM to Ubuntu
when I was running CentOS 5 in the VM.  The cause was that VMware
assumed a tickless kernel for Ubuntu, but not for CentOS 5, and there
were optimizations in the VM emulation that counted on VMware knowing
what timekeeping options were set in the kernel.

Nataraj



Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Ross Walker
On Apr 15, 2011, at 9:17 AM, Rudi Ahlers r...@softdux.com wrote:

 
 
 On Fri, Apr 15, 2011 at 3:05 PM, Christopher Chan 
 christopher.c...@bradbury.edu.hk wrote:
 On Friday, April 15, 2011 07:24 PM, Benjamin Franz wrote:
  On 04/14/2011 09:00 PM, Christopher Chan wrote:
 
  Wanna try that again with 64MB of cache only and tell us whether there
  is a difference in performance?
 
  There is a reason why 3ware 85xx cards were complete rubbish when used
  for raid5 and which led to the 95xx/96xx series.
 
  I don't happen to have any systems I can test with the 1.5TB drives
  without controller cache right now, but I have a system with some old
  500GB drives  (which are about half as fast as the 1.5TB drives in
  individual sustained I/O throughput) attached directly to onboard SATA
  ports in a 8 x RAID6 with *no* controller cache at all. The machine has
  16GB of RAM and bonnie++ therefore used 32GB of data for the test.
 
  Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
  Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
  Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
  pbox3        32160M   389  98 76709  22 91071  26  2209  95 264892  26 590.5  11
  Latency             24190us    1244ms    1580ms   60411us   69901us   42586us
  Version  1.96       ------Sequential Create------ --------Random Create--------
  pbox3               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                   16 10910  31 +++++ +++ +++++ +++ 29293  80 +++++ +++ +++++ +++
  Latency               775us     610us     979us     740us     370us     380us
 
  Given that the underlying drives are effectively something like half as
  fast as the drives in the other test, the results are quite comparable.
 
 Woohoo, next we will be seeing md raid6 also giving comparable results
 if that is the case. I am not the only person on this list who thinks
 cache is king for raid5/6 on hardware raid boards, and using hardware
 raid + bbu cache for better performance is one of the two reasons why we
 don't do md raid5/6.
 
 
 
  Cache doesn't make a lot of difference when you quickly write a lot more
  data than the cache can hold. The limiting factor becomes the slowest
  component - usually the drives themselves. Cache isn't magic performance
  pixie dust. It helps in certain use cases and is nearly irrelevant in
  others.
 
 
 Yeah, you are right - but cache is primarily to buffer the writes for
 performance. Why else go through the expense of getting bbu cache? So
 what happens when you tweak bonnie a bit?
 
 
 
 As a matter of interest, does anyone know how to use an SSD drive for cache 
 purposes on Linux software RAID drives? ZFS has this feature and it makes a 
 helluva difference to a storage server's performance. 

Put the file system's log device on it.

-Ross



Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Rudi Ahlers
On Fri, Apr 15, 2011 at 6:26 PM, Ross Walker rswwal...@gmail.com wrote:

 On Apr 15, 2011, at 9:17 AM, Rudi Ahlers r...@softdux.com wrote:



 On Fri, Apr 15, 2011 at 3:05 PM, Christopher Chan 
 christopher.c...@bradbury.edu.hk
 christopher.c...@bradbury.edu.hk wrote:

 On Friday, April 15, 2011 07:24 PM, Benjamin Franz wrote:
  On 04/14/2011 09:00 PM, Christopher Chan wrote:
 
  Wanna try that again with 64MB of cache only and tell us whether there
  is a difference in performance?
 
  There is a reason why 3ware 85xx cards were complete rubbish when used
  for raid5 and which led to the 95xx/96xx series.
 
  I don't happen to have any systems I can test with the 1.5TB drives
  without controller cache right now, but I have a system with some old
  500GB drives  (which are about half as fast as the 1.5TB drives in
  individual sustained I/O throughput) attached directly to onboard SATA
  ports in a 8 x RAID6 with *no* controller cache at all. The machine has
  16GB of RAM and bonnie++ therefore used 32GB of data for the test.
 
  Version  1.96   --Sequential Output-- --Sequential Input-
  --Random-
  Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
  --Seeks--
  MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
  /sec %CP
  pbox332160M   389  98 76709  22 91071  26  2209  95 264892  26
  590.5  11
  Latency 24190us1244ms1580ms   60411us   69901us
  42586us
  Version  1.96   --Sequential Create-- Random
  Create
  pbox3   -Create-- --Read--- -Delete-- -Create-- --Read---
  -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
  /sec %CP
 16 10910  31 + +++ + +++ 29293  80 + +++
  + +++
  Latency   775us 610us 979us 740us 370us
  380us
 
  Given that the underlying drives are effectively something like half as
  fast as the drives in the other test, the results are quite comparable.

 Woohoo, next we will be seeing md raid6 also giving comparable results
 if that is the case. I am not the only person on this list who thinks
 cache is king for raid5/6 on hardware raid boards, and using hardware
 raid + bbu cache for better performance is one of the two reasons why we
 don't do md raid5/6.


 
  Cache doesn't make a lot of difference when you quickly write a lot more
  data than the cache can hold. The limiting factor becomes the slowest
  component - usually the drives themselves. Cache isn't magic performance
  pixie dust. It helps in certain use cases and is nearly irrelevant in
  others.
 

 Yeah, you are right - but cache is primarily to buffer the writes for
 performance. Why else go through the expense of getting bbu cache? So
 what happens when you tweak bonnie a bit?



 As a matter of interest, does anyone know how to use an SSD drive for cache
 purposes on Linux software RAID drives? ZFS has this feature and it makes a
 helluva difference to a storage server's performance.


 Put the file system's log device on it.

 -Ross






Well, ZFS has a separate ZIL for that purpose, and the ZIL adds extra
protection / redundancy to the whole pool.

But the Cache / L2ARC drive caches all common reads & writes (simply put)
onto SSD to improve overall system performance.

So I was wondering if one could do this with mdraid or even just EXT3 /
EXT4?



-- 
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532


Re: [CentOS] 5.6 - SRPM's

2011-04-15 Thread Filipe Rosset
On 04/13/2011 07:54 AM, Karanbir Singh wrote:
 
 They are definitely in there, just slow.
 
 - KB

Hi guys,

Still no SRPMs in 5.6/os/SRPMS/

-- 
Filipe
Rio Grande do Sul, Brazil


[CentOS] cross-platform email client

2011-04-15 Thread Florin Andrei
I'm a Thunderbird user almost since day one, but now I'm looking for 
something else. For whatever reason, it doesn't work well for me - every 
once in a while it becomes non-responsive (UI completely frozen for 
several seconds, CPU usage goes to 100%) and I just can't afford to 
waste time waiting for the email software to start working again.

My main desktop platform is Linux, but I need a client that works the 
same and looks the same on Windows too. Email server is IMAP with a 
pretty hefty account: over a hundred folders, hundreds of thousands of 
messages total (server-side filtering with Sieve). Typically it's a 
remote session, over VPN. So the client better work well, and be 
glitch-free.

The issues with Thunderbird might be related to the size of my IMAP 
account, plus the VPN latency - but frankly, I don't care, the client 
needs to hide all that stuff from me, do the updates or whatever in the 
background, instead of blocking the UI until it's done. Ironically, it 
blocked when I was done with this paragraph and I hit Enter. Sticking it 
to the man one last time, I guess.

Any suggestions? Thanks.

-- 
Florin Andrei
http://florin.myip.org/


Re: [CentOS] cross-platform email client

2011-04-15 Thread John R Pierce
On 04/15/11 12:07 PM, Florin Andrei wrote:
 I'm a Thunderbird user almost since day one, but now I'm looking for
 something else. For whatever reason, it doesn't work well for me - every
 once in a while it becomes non-responsive (UI completely frozen for
 several seconds, CPU usage goes to 100%) and I just can't afford to
 waste time waiting for the email software to start working again.



I think T-bird gets locked up when it's SENDING mail, if the server takes 
too long to reply at the early stages of the protocol.  That, or DNS 
lookups take too long.




Re: [CentOS] cross-platform email client

2011-04-15 Thread Jeff
On Fri, Apr 15, 2011 at 2:07 PM, Florin Andrei flo...@andrei.myip.org wrote:
 I'm a Thunderbird user almost since day one, but now I'm looking for
 something else. For whatever reason, it doesn't work well for me - every
 once in a while it becomes non-responsive (UI completely frozen for
 several seconds, CPU usage goes to 100%) and I just can't afford to
 waste time waiting for the email software to start working again.

 My main desktop platform is Linux, but I need a client that works the
 same and looks the same on Windows too. Email server is IMAP with a
 pretty hefty account: over a hundred folders, hundreds of thousands of
 messages total (server-side filtering with Sieve). Typically it's a
 remote session, over VPN. So the client better work well, and be
 glitch-free.

 The issues with Thunderbird might be related to the size of my IMAP
 account, plus the VPN latency - but frankly, I don't care, the client
 needs to hide all that stuff from me, do the updates or whatever in the
 background, instead of blocking the UI until it's done. Ironically, it
 blocked when I was done with this paragraph and I hit Enter. Sticking it
 to the man one last time, I guess.

 Any suggestions? Thanks.

By default Thunderbird creates a local cache for IMAP accounts -- for
large accounts, this can be problematic. Have you tried disabling the
local synchronization?

Account Settings -> Synch & Storage -> uncheck "Keep messages for this
account on this computer"

Or at least that's where it is in Windows T-Bird.

--
Jeff


Re: [CentOS] cross-platform email client

2011-04-15 Thread Scott Robbins
On Fri, Apr 15, 2011 at 02:30:10PM -0500, Jeff wrote:

 On Fri, Apr 15, 2011 at 2:07 PM, Florin Andrei flo...@andrei.myip.org wrote:
  I'm a Thunderbird user almost since day one, but now I'm looking for
  something else. For whatever reason, it doesn't work well for me - every
  once in a while it becomes non-responsive (UI completely frozen for
  several seconds, CPU usage goes to 100%) and I just can't afford to
  waste time waiting for the email software to start working again.

 By default Thunderbird creates a local cache for IMAP accounts -- for
 large accounts, this can be problematic. Have you tried disabling the
 local synchronization?
 
 Account Settings -> Synch & Storage -> uncheck "Keep messages for this
 account on this computer"
 
There is another setting that can apparently cause high CPU usage.
Preferences -> Advanced -> General -> Advanced Configuration -> Enable
Global Search and Indexer (don't have Thunderbird handy, so that path
might be slightly off.)
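For reference, both GUI settings map to entries in the profile's prefs.js (also reachable via the Config Editor). A sketch from memory — treat the exact pref names, and especially the per-account serverN index, as assumptions to verify in your own profile:

```javascript
// Equivalent of unchecking "Keep messages for this account on this computer"
// (server1 is a placeholder; the index differs per profile):
user_pref("mail.server.server1.offline_download", false);

// Equivalent of disabling "Enable Global Search and Indexer":
user_pref("mailnews.database.global.indexer.enabled", false);
```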

-- 
Scott Robbins
PGP keyID EB3467D6
( 1B48 077D 66F6 9DB0 FDC2 A409 FA54 EB34 67D6 )
gpg --keyserver pgp.mit.edu --recv-keys EB3467D6

Buffy: I can't believe you got into Oxford! 
Willow: It's pretty exciting. 
Oz: That's some deep academia there. 
Buffy: That's where they make Gileses! 
Willow: I know! I can learn, and have scones! 


Re: [CentOS] cross-platform email client

2011-04-15 Thread Florin Andrei
On 04/15/2011 12:28 PM, John R Pierce wrote:

 I think T-bird gets locked up when its SENDING mail if the server takes
 too long to reply at the early stages of the protocol.  that or DNS
 lookups take too long.

At least in my case - no and no.

It freezes randomly but pretty often, no relation to sending emails.

The IMAP and SMTP servers are defined by IP address, not hostname. But 
even if that were the case, software that blocks the UI completely 
while waiting for something in the background? Sounds like 1999 all over 
again.

-- 
Florin Andrei
http://florin.myip.org/


Re: [CentOS] cross-platform email client

2011-04-15 Thread Florin Andrei
On 04/15/2011 12:30 PM, Jeff wrote:

 By default Thunderbird creates a local cache for IMAP accounts -- for
 large accounts, this can be problematic. Have you tried disabling the
 local synchronization?

 Account Settings -> Synch & Storage -> uncheck "Keep messages for this
 account on this computer"

It's unchecked already.

-- 
Florin Andrei
http://florin.myip.org/


Re: [CentOS] cross-platform email client

2011-04-15 Thread John R Pierce
On 04/15/11 12:45 PM, Florin Andrei wrote:
 On 04/15/2011 12:28 PM, John R Pierce wrote:
 I think T-bird gets locked up when its SENDING mail if the server takes
 too long to reply at the early stages of the protocol.  that or DNS
 lookups take too long.
 At least in my case - no and no.

 It freezes randomly but pretty often, no relation to sending emails.

 The IMAP and SMTP servers are defined by IP address, not hostname. But
 even if that was the case, a software that blocks the UI completely
 while waiting for something in the background? Sounds like 1999 all over
 again.

My local SMTP server is intentionally configured to verify delivery 
addresses before it accepts a mail.  Sometimes this causes delays.




Re: [CentOS] cross-platform email client

2011-04-15 Thread Michael Davis
On 4/15/2011 3:46 PM, Florin Andrei wrote:
 On 04/15/2011 12:30 PM, Jeff wrote:
 By default Thunderbird creates a local cache for IMAP accounts -- for
 large accounts, this can be problematic. Have you tried disabling the
 local synchronization?

 Account Settings -> Synch & Storage -> uncheck "Keep messages for this
 account on this computer"
 It's unchecked already.


I experienced a similar problem with Thunderbird on Windows. For me, it 
ended up being folder compaction. Changing the settings on compaction 
(Tools/Options/Advanced/Network & Disk Space) reduced the frequency that 
folders are compacted, and thereby my frustration, but did not eliminate 
them. I agree that the UI should not be affected by maintenance functions.

Hope this helps.

Michael Davis
Profician Corporation



Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Ross Walker
On Apr 15, 2011, at 12:32 PM, Rudi Ahlers r...@softdux.com wrote:

 
 
 On Fri, Apr 15, 2011 at 6:26 PM, Ross Walker rswwal...@gmail.com wrote:
 On Apr 15, 2011, at 9:17 AM, Rudi Ahlers r...@softdux.com wrote:
 
 
 
 On Fri, Apr 15, 2011 at 3:05 PM, Christopher Chan 
 christopher.c...@bradbury.edu.hk wrote:
 On Friday, April 15, 2011 07:24 PM, Benjamin Franz wrote:
  On 04/14/2011 09:00 PM, Christopher Chan wrote:
 
  Wanna try that again with 64MB of cache only and tell us whether there
  is a difference in performance?
 
  There is a reason why 3ware 85xx cards were complete rubbish when used
  for raid5 and which led to the 95xx/96xx series.
 
  I don't happen to have any systems I can test with the 1.5TB drives
  without controller cache right now, but I have a system with some old
  500GB drives  (which are about half as fast as the 1.5TB drives in
  individual sustained I/O throughput) attached directly to onboard SATA
  ports in a 8 x RAID6 with *no* controller cache at all. The machine has
  16GB of RAM and bonnie++ therefore used 32GB of data for the test.
 
  Version  1.96   --Sequential Output-- --Sequential Input-
  --Random-
  Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
  --Seeks--
  MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
  /sec %CP
  pbox332160M   389  98 76709  22 91071  26  2209  95 264892  26
  590.5  11
  Latency 24190us1244ms1580ms   60411us   69901us
  42586us
  Version  1.96   --Sequential Create-- Random
  Create
  pbox3   -Create-- --Read--- -Delete-- -Create-- --Read---
  -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
  /sec %CP
 16 10910  31 + +++ + +++ 29293  80 + +++
  + +++
  Latency   775us 610us 979us 740us 370us
  380us
 
  Given that the underlying drives are effectively something like half as
  fast as the drives in the other test, the results are quite comparable.
 
 Woohoo, next we will be seeing md raid6 also giving comparable results
 if that is the case. I am not the only person on this list who thinks
 cache is king for raid5/6 on hardware raid boards, and using hardware
 raid + bbu cache for better performance is one of the two reasons why we
 don't do md raid5/6.
 
 
 
  Cache doesn't make a lot of difference when you quickly write a lot more
  data than the cache can hold. The limiting factor becomes the slowest
  component - usually the drives themselves. Cache isn't magic performance
  pixie dust. It helps in certain use cases and is nearly irrelevant in
  others.
 
 
 Yeah, you are right - but cache is primarily to buffer the writes for
 performance. Why else go through the expense of getting bbu cache? So
 what happens when you tweak bonnie a bit?
 
 
 
 As a matter of interest, does anyone know how to use an SSD drive for cache 
 purposes on Linux software RAID drives? ZFS has this feature and it makes a 
 helluva difference to a storage server's performance. 
 
 Put the file system's log device on it.
 
 -Ross
 
 
 
 
 
 Well, ZFS has a separate ZIL for that purpose, and the ZIL adds extra 
 protection / redundancy to the whole pool. 
 
 But the Cache / L2ARC drive caches all common reads & writes (simply put) 
 onto SSD to improve overall system performance. 
 
 So I was wondering if one could do this with mdraid or even just EXT3 / EXT4?

Ext3/4 and XFS allow specifying an external log device which, if it is an SSD, 
can speed up writes. All these file systems aggressively use the page cache for 
read/write caching. The only thing you don't get is L2ARC-type cache, but I have 
heard of a dm-cache project that might provide that type of cache.
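A sketch of the external-log setup Ross describes, with placeholder device names (/dev/sdb1 for an SSD partition, /dev/md0 for the md array). Illustrative only — adjust the names and sizes before running anything like this on real disks:

```shell
# Hypothetical device names: /dev/sdb1 = SSD partition, /dev/md0 = md array.
# ext4: format the SSD partition as a standalone external journal ...
mke2fs -O journal_dev /dev/sdb1
# ... then create the file system with its journal on that device
mkfs.ext4 -J device=/dev/sdb1 /dev/md0

# XFS: put the log on the SSD at mkfs time, and pass it again at mount time
mkfs.xfs -l logdev=/dev/sdb1,size=128m /dev/md0
mount -o logdev=/dev/sdb1 /dev/md0 /mnt/data
```

This accelerates journal (metadata/log) writes only; it is not a general read cache like L2ARC.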

-Ross



Re: [CentOS] CentOs 5.6 and Time Sync

2011-04-15 Thread Mailing List

On 4/15/2011 7:08 AM, Johnny Hughes wrote:


I do not see anything from Dell that is a model C151.

I also do not see anything in the RH bugzilla that is problematic for
older AMD processors and the clock, unless running KVM type virtual
machines.

Is this a VM or regular install?

If this a real machine, do you have the latest BIOS from Dell?

Do you have any special kernel options in grub?



Johnny,

Sorry about the wrong system ID number; here is what it is.

Dell Inspiron C521
Bios Version 1.1.11 (08/07/2007)

 It is not a VM, it is a regular install. I have not made any 
changes to the kernel options. It has been fine with a stock install so 
I never had any need to tweak it.


Thank you.
Brian







Re: [CentOS] CentOs 5.6 and Time Sync

2011-04-15 Thread Mailing List

On 4/15/2011 4:58 PM, Mailing List wrote:

Johnny,

Sorry about the wrong system id number here is what it is.

Dell Inspiron C521
Bios Version 1.1.11 (08/07/2007)

 It is not a VM, it is a regular install. I have not made any 
changes to the kernel options. It has been fine with a stock install 
so I never had any need to tweek it.


Thank you.
Brian


   I would have answered sooner but my ISP ended up in the trash can 
due to the list's spam filters.


  I tried the latest kernel that was just rolled out, 
kernel-2.6.18-238.9.1.el5, and it was a mess also.


Brian.





[CentOS] unrar rpm package

2011-04-15 Thread Kaplan, Andrew H.
Hi there --

I am running a server with the 5.6 64-bit distribution, and I am looking for an
unrar rpm package. 
I have several repositories set up on the server which are the following:

base
updates
extras
centosplus
contrib
c5-testing

All are enabled. When I do a yum search for unrar, nothing comes up. Is there
another repository that I should add to the list, or is there a particular
website from which I can get the package?

Thanks.






Re: [CentOS] unrar rpm package

2011-04-15 Thread Frank Cox
On Fri, 15 Apr 2011 17:38:49 -0400
Kaplan, Andrew H. wrote:

 All are enabled. When I do a yum search for unrar, nothing comes up. Is there
 another repository that I should add to the list, or is there a particular
 website from which I can get the package?

rpmfusion has it.


-- 
MELVILLE THEATRE ~ Real D 3D Digital Cinema ~ www.melvilletheatre.com
www.creekfm.com - FIFTY THOUSAND WATTS of POW WOW POWER!


Re: [CentOS] unrar rpm package

2011-04-15 Thread John R Pierce
On 04/15/11 2:38 PM, Kaplan, Andrew H. wrote:

 Hi there --

 I am running a server with the 5.6 64-bit distribution, and I am 
 looking for an unrar rpm package.
 I have several repositories set up on the server which are the following:

 base
 updates
 extras
 centosplus
 contrib
 c5-testing

 All are enabled. When I do a yum search for unrar, nothing comes up.
 Is there another repository that I should add to the list, or is there
 a particular website from which I can get the package?



I see a rar and unrar in rpmforge.

# yum --enablerepo=rpmforge list rar unrar
..
Available Packages
rar.i386
3.8.0-1.el5.rf  rpmforge
unrar.i386  
4.0.7-1.el5.rf  rpmforge


there's probably other ports in places like EPEL but I didn't look there.




Re: [CentOS] unrar rpm package

2011-04-15 Thread Mailing List

On 4/15/2011 5:50 PM, Frank Cox wrote:

On Fri, 15 Apr 2011 17:38:49 -0400
Kaplan, Andrew H. wrote:


All are enabled. When I do a yum search for unrar, nothing comes up. Is there
another repository
that I should add to the list, or is there a particular website that I can go
to get the package?

rpmforge also has it:

http://wiki.centos.org/AdditionalResources/Repositories/RPMForge






Re: [CentOS] unrar rpm package

2011-04-15 Thread Scott Robbins
On Fri, Apr 15, 2011 at 02:54:23PM -0700, John R Pierce wrote:
 On 04/15/11 2:38 PM, Kaplan, Andrew H. wrote:
 
  Hi there --
 
  I am running a server with the 5.6 64-bit distribution, and I am 
  looking for an unrar rpm package.
  I have several repositories set up on the server which are the following:

As was mentioned, rpmforge has it.  For what it's worth, p7zip does the
same thing and somewhat more quickly, at least in my very rough
benchmarks, e.g. "time rar e something.rar" vs "time 7z e something.rar".


-- 
Scott Robbins
PGP keyID EB3467D6
( 1B48 077D 66F6 9DB0 FDC2 A409 FA54 EB34 67D6 )
gpg --keyserver pgp.mit.edu --recv-keys EB3467D6


Wesley:  Wait for Faith.
 Buffy:  That could be hours.  The girl makes Godot look punctual.


Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Christopher Chan


 As a matter of interest, does anyone know how to use an SSD drive for
 cache purposes on Linux software RAID drives? ZFS has this feature and
 it makes a helluva difference to a storage server's performance.

You cannot. You can, however, use one for the external journal of ext3/4
in full journaling mode, for something similar.


Re: [CentOS] cross-platform email client

2011-04-15 Thread Devin Reade
Florin Andrei flo...@andrei.myip.org wrote:

 I'm a Thunderbird user almost since day one, but now I'm looking for 
 something else.

Check out Mulberry.  http://mulberrymail.com/  It hasn't been updated
in a while, but don't let that scare you off. It's a very solid mail
reader for Linux, Mac, and Windows. It does all the usual mail-related
protocols, including crypto, authentication, filtering (server-side and, I
think, client-side), address books, scheduling, etc.

To put it in perspective, my client talks to four different IMAP accounts,
the largest of which has 326 subfolders and 530,000 messages. The only
bug that I seem to run into with the latest version is if the SMTP server
isn't available when you send your first message after starting up, then
the message you sent doesn't get kicked out of the local spool until you
send the 2nd message. (Earlier versions would retry periodically; maybe
there's a config setting somewhere I've not noticed, but it hasn't annoyed
me enough to track it down.)

If you're installing on CentOS you will need, IIRC, one of the
compat-libc RPMs to be installed.  Use ldd to figure out which one.
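The ldd check can look like this; a sketch, with /bin/sh standing in for the mulberry binary:

```shell
# List the shared libraries a binary needs; unresolved ones show
# "not found" and point at the compat package to install.
# /bin/sh stands in for the mulberry executable here.
missing=$(ldd /bin/sh | grep -c 'not found' || true)
if [ "$missing" -eq 0 ]; then
    echo "all libraries resolved"
else
    echo "$missing unresolved libraries - find the matching compat RPM"
fi
```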

Just grab the mulberry client.  Don't bother with the mulberry admin
tool; it's intended for large scale deployments.

Devin



Re: [CentOS] cents 5.6 ..... futur

2011-04-15 Thread Michel Donais
 Will it be the same from 5.6 to 6.0, or will a full install be better?

 Full installs are always recommended between major versions.


Thanks, all, for the advice; but is there any easy way to install a newer
version while keeping all the configuration changes that were made on a
previous one, such as for 'sendmail', 'sshd.conf', 'firewalls', etc.?
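There is no fully automatic way, but a common low-tech approach is to archive the old configuration before reinstalling, then diff and merge by hand afterwards. A sketch (paths illustrative; on a real system you would archive /etc and any other locally modified paths):

```shell
# Sketch: preserve configuration across a fresh install by archiving it
# first.  Do not blindly restore old files over new ones - diff and
# merge instead.  CFG_DIR defaults to /etc; override it for testing.
CFG_DIR=${CFG_DIR:-/etc}
STAMP=$(date +%Y%m%d)
tar czf "config-backup-$STAMP.tar.gz" "$CFG_DIR" 2>/dev/null || true

# After the new install, compare rather than overwrite, e.g.:
#   tar xzf config-backup-$STAMP.tar.gz -C /tmp/old
#   diff -u /tmp/old/etc/ssh/sshd_config /etc/ssh/sshd_config
ls -l "config-backup-$STAMP.tar.gz"
```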


---
Michel Donais 



Re: [CentOS] cents 5.6 ..... futur

2011-04-15 Thread John R Pierce
On 04/15/11 7:40 PM, Michel Donais wrote:
 Will it be the same from 5.6 to 6.0, or will a full install be better?
 Full installs are always recommended between major versions.

 Thanks, all, for the advice; but is there any easy way to install a newer
 version while keeping all the configuration changes that were made on a
 previous one, such as for 'sendmail', 'sshd.conf', 'firewalls', etc.?

have all your configuration under a change management system, with an at 
least semi-automated installation procedure, such as kickstart.
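For illustration, a minimal kickstart file might look like the sketch below. Every value is a placeholder, and mirror.example.com is hypothetical; at install time the file is passed on the boot line (e.g. ks=http://server/ks.cfg).

```shell
# Write a minimal kickstart file.  All values are illustrative
# placeholders; mirror.example.com is hypothetical.
cat > ks.cfg <<'EOF'
install
url --url=http://mirror.example.com/centos/5/os/i386
lang en_US.UTF-8
keyboard us
rootpw changeme
timezone America/New_York
bootloader --location=mbr
clearpart --all --initlabel
autopart
%packages
@base
sendmail
%post
# re-apply local configuration here (sshd_config tweaks, firewall rules, ...)
EOF
echo "wrote ks.cfg with $(wc -l < ks.cfg) lines"
```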




Re: [CentOS] cents 5.6 ..... futur

2011-04-15 Thread Michel Donais
 have all your configuration under a change management system, with an at 
 least semi-automated installation procedure, such as kickstart.

I never thought kickstart was what I needed.
I will check what it is and how it works.

Thanks for the info.

---
Michel Donais




[CentOS] Gnome Notification Applet

2011-04-15 Thread Ron Blizzard
I tried out Scientific Linux 6 Live to see (basically) what I can
expect with CentOS 6 and was pleased to find that everything looks
pretty familiar and is easily customizable to make it look and feel
like 5.6 -- except for one thing that I also noticed in Ubuntu's
newest beta (my Dad uses Linux Mint). For whatever reason, Gnome has
decided to put the Volume Control and Network Manager in the
Notification Applet. (It's worse with Ubuntu, they've put four applets
there by default.) On my desktop I don't display the Network Manager,
but I like the Volume Control to be there (on the very right beside
the clock). I spent most of my trial time with SL 6 trying to figure
out how to separate these two applets from the Notification Applet --
without success. Is there a configuration file I can change or a
configuration program I can run to customize this?

I realize it's not a huge deal, but it's an irritant. Why does Gnome
want to limit the ability to customize?

Thanks for any pointers.

-- 
RonB -- Using CentOS 5.6


Re: [CentOS] cents 5.6 ..... futur

2011-04-15 Thread Devin Reade
John R Pierce pie...@hogranch.com wrote:

 have all your configuration under a change management system, with an at 
 least semi-automated installation procedure, such as kickstart.

Or have the self-discipline to keep a text file (or other record) of
*all* changes you make to a system as root or another role account.
I always keep a running log, complete with dates and who made the
change, as /root/`hostname`-mods.  Trivial operations (that any junior
sysadmin would be expected to know) get described; anything more complex
gets the actual commands entered (minus passwords).
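That habit can even be lightly scripted. A sketch (the log lives in the current directory here instead of /root, and the entries are made-up examples):

```shell
# Sketch of the running-log habit: one dated line per change.
# LOG would normally be /root/$(hostname)-mods; a local path is used here.
LOG="./$(hostname)-mods"

log_change() {
    printf '%s  %s  %s\n' "$(date '+%Y-%m-%d %H:%M')" "$(id -un)" "$*" >> "$LOG"
}

# Example entries (made up for illustration):
log_change "set PermitRootLogin no in /etc/ssh/sshd_config"
log_change "opened tcp/8080 in iptables for the staging app"

cat "$LOG"
```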

It's extra work; however, not only has it saved my bacon a lot over the
years in figuring out, after the fact, what caused something to break,
but even more often it has been invaluable in recreating a system or
quickly implementing similar functions on other systems.

Yes, this is a form of a change management system, just with little
overhead.  It is also more suited to server environments where each
one might be slightly different as opposed to (for example) corporate
workstation environments where you can have a large number of homogeneous 
machines.  In that case, there are many other tools more suitable,
with higher setup costs, but the amortized cost is lower.

Devin
-- 
When I was little, my grandfather used to make me stand in a closet for 
five minutes without moving. He said it was elevator practice.
- Stephen Wright
