Re: [CentOS] cents 5.6 ..... futur

2011-04-15 Thread Devin Reade
John R Pierce  wrote:

> have all your configuration under a change management system, with an at 
> least semi-automated installation procedure, such as kickstart.

Or have the self-discipline to keep a text file (or other record) of 
*all* changes you make to a system as root or another role account.
I always keep a running log, complete with dates and who made each 
change, as /root/`hostname`-mods.  Trivial operations (that any junior
sysadmin would be expected to know) get described. Anything more complex
gets the actual commands entered (minus passwords).
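
For illustration only (the entry, file names, and commands below are
hypothetical), an entry in such a log might look like:

2011-04-15 (devin)
Disabled password logins for root over ssh:
  # cp -p /etc/ssh/sshd_config /etc/ssh/sshd_config.orig
  # vi /etc/ssh/sshd_config    (set "PermitRootLogin without-password")
  # service sshd restart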

It's extra work, but not only has it saved my bacon many times over the
years in figuring out, after the fact, what caused something to break;
even more often it has been invaluable in recreating a system or in
quickly implementing similar functions on other systems.

Yes, this is a form of a change management system, just with little
overhead.  It is also more suited to server environments where each
one might be slightly different as opposed to (for example) corporate
workstation environments where you can have a large number of homogeneous 
machines.  In that case, there are many other tools more suitable,
with higher setup costs, but the amortized cost is lower.

Devin
-- 
When I was little, my grandfather used to make me stand in a closet for 
five minutes without moving. He said it was elevator practice.
- Stephen Wright

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Gnome Notification Applet

2011-04-15 Thread Ron Blizzard
I tried out Scientific Linux 6 Live to see (basically) what I can
expect with CentOS 6 and was pleased to find that everything looks
pretty familiar and is easily customizable to make it look and feel
like 5.6 -- except for one thing that I also noticed in Ubuntu's
newest beta (my Dad uses Linux Mint). For whatever reason, Gnome has
decided to put the Volume Control and Network Manager in the
Notification Applet. (It's worse with Ubuntu, they've put four applets
there by default.) On my desktop I don't display the Network Manager,
but I like the Volume Control to be there (on the very right beside
the clock). I spent most of my "trial time" with SL 6 trying to figure
out how to separate these two applets from the Notification Applet --
without success. Is there a configuration file I can change or a
configuration program I can run to customize this?

I realize it's not a huge deal, but it's an irritant. Why does Gnome
want to limit the ability to customize?

Thanks for any pointers.

-- 
RonB -- Using CentOS 5.6
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cents 5.6 ..... futur

2011-04-15 Thread Michel Donais
> have all your configuration under a change management system, with an at 
> least semi-automated installation procedure, such as kickstart.

I never thought kickstart was what I needed.
I will check what it is and how it works.

Thanks for the info.

---
Michel Donais


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cents 5.6 ..... futur

2011-04-15 Thread John R Pierce
On 04/15/11 7:40 PM, Michel Donais wrote:
>>> Will it be the same from 5.6 to 6.0 or a full install will be better.
>> Full installs are always recommended between major versions.
>
> Thanks, all, for the advice; but is there any easy way to install a newer
> version while keeping all configuration changes that have been made on a
> previous one, such as 'sendmail', 'sshd.conf', 'firewalls', etc.?

have all your configuration under a change management system, with an at 
least semi-automated installation procedure, such as kickstart.
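
As a rough sketch (the host and file names are made up; a kickstart %post
section is the usual place to pull local configuration back in):

%post
# fetch locally maintained configs from an internal server
wget -q http://ks.example.com/configs/sendmail.mc -O /etc/mail/sendmail.mc
wget -q http://ks.example.com/configs/sshd_config -O /etc/ssh/sshd_config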


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cents 5.6 ..... futur

2011-04-15 Thread Michel Donais
>> Will it be the same from 5.6 to 6.0 or a full install will be better.
>
> Full installs are always recommended between major versions.


Thanks, all, for the advice; but is there any easy way to install a newer 
version while keeping all configuration changes that have been made on a 
previous one, such as 'sendmail', 'sshd.conf', 'firewalls', etc.?


---
Michel Donais 

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cross-platform email client

2011-04-15 Thread Devin Reade
Florin Andrei  wrote:

> I'm a Thunderbird user almost since day one, but now I'm looking for 
> something else.

Check out Mulberry. It hasn't been updated
in a while, but don't let that scare you off. It's a very solid mail
reader for Linux, Mac, and Windows. It does all the usual mail-related
protocols, including crypto, authentication, filtering (server and, I
think, client side), address books, scheduling, etc.

To put it in perspective, my client talks to four different IMAP accounts,
the largest of which has 326 subfolders and 530,000 messages. The only
bug that I seem to run into with the latest version is that if the SMTP
server isn't available when you send your first message after starting up,
the message you sent doesn't get kicked out of the local spool until you
send the 2nd message. (Earlier versions would retry periodically; maybe
there's a config setting somewhere I've not noticed, but it hasn't annoyed
me enough to track it down.)

If you're installing on CentOS you will need, IIRC, one of the 
compat-libc RPMs installed.  Use ldd to figure out which one.
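
Something along these lines (the exact missing library, and therefore the
RPM, will vary; output abridged):

$ ldd ./mulberry | grep 'not found'
        libstdc++.so.5 => not found
$ yum provides 'libstdc++.so.5'
compat-libstdc++-33-3.2.3-61.i386 : Compatibility standard C++ libraries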

Just grab the mulberry client.  Don't bother with the mulberry admin
tool; it's intended for large scale deployments.

Devin

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Christopher Chan

>
> As a matter of interest, does anyone know how to use an SSD drive for cache
> purposes on Linux software RAID drives? ZFS has this feature and it
> makes a helluva difference to a storage server's performance.

You cannot. You can, however, use one as the external journal device for 
ext3/4 in full data-journaling mode, for something similar.
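
A minimal sketch of that setup, assuming the SSD partition is /dev/sdb1
and the md array is /dev/md0 (both get reformatted, and the journal
device must use the same block size as the file system):

# create a dedicated external journal on the SSD
mke2fs -O journal_dev /dev/sdb1
# build the file system on the array, pointing at the external journal
mkfs.ext4 -J device=/dev/sdb1 /dev/md0
# mount with full data journaling so writes hit the SSD journal first
mount -o data=journal /dev/md0 /mnt/storage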
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] unrar rpm package

2011-04-15 Thread Scott Robbins
On Fri, Apr 15, 2011 at 02:54:23PM -0700, John R Pierce wrote:
> On 04/15/11 2:38 PM, Kaplan, Andrew H. wrote:
> >
> > Hi there --
> >
> > I am running a server with the 5.6 64-bit distribution, and I am 
> > looking for an unrar rpm package.
> > I have several repositories set up on the server which are the following:

As was mentioned, rpmforge has it.  For what it's worth, p7zip does the
same thing and somewhat more quickly, at least in my very rough
benchmarks, e.g. time rar e something.rar vs. time 7z e something.rar.


-- 
Scott Robbins
PGP keyID EB3467D6
( 1B48 077D 66F6 9DB0 FDC2 A409 FA54 EB34 67D6 )
gpg --keyserver pgp.mit.edu --recv-keys EB3467D6


Wesley:  Wait for Faith.
 Buffy:  That could be hours.  The girl makes Godot look punctual.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] unrar rpm package

2011-04-15 Thread Mailing List

On 4/15/2011 5:50 PM, Frank Cox wrote:

On Fri, 15 Apr 2011 17:38:49 -0400
Kaplan, Andrew H. wrote:


All are enabled. When I do a yum search for unrar, nothing comes up. Is there
another repository
that I should add to the list, or is there a particular website that I can go
to get the package?

rpmforge also has it:

http://wiki.centos.org/AdditionalResources/Repositories/RPMForge




___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] unrar rpm package

2011-04-15 Thread John R Pierce
On 04/15/11 2:38 PM, Kaplan, Andrew H. wrote:
>
> Hi there --
>
> I am running a server with the 5.6 64-bit distribution, and I am 
> looking for an unrar rpm package.
> I have several repositories set up on the server which are the following:
>
> base
> updates
> extras
> centosplus
> contrib
> c5-testing
>
> All are enabled. When I do a yum search for unrar, nothing comes up. 
> Is there another repository
> that I should add to the list, or is there a particular website that I 
> can go to get the package?
>


I see a rar and unrar in rpmforge.

# yum --enablerepo=rpmforge list rar unrar
...
Available Packages
rar.i386        3.8.0-1.el5.rf        rpmforge
unrar.i386      4.0.7-1.el5.rf        rpmforge


There are probably other ports in places like EPEL, but I didn't look there.


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] unrar rpm package

2011-04-15 Thread Frank Cox
On Fri, 15 Apr 2011 17:38:49 -0400
Kaplan, Andrew H. wrote:

> All are enabled. When I do a yum search for unrar, nothing comes up. Is there
> another repository
> that I should add to the list, or is there a particular website that I can go
> to get the package?

rpmfusion has it.


-- 
MELVILLE THEATRE ~ Real D 3D Digital Cinema ~ www.melvilletheatre.com
www.creekfm.com - FIFTY THOUSAND WATTS of POW WOW POWER!
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] unrar rpm package

2011-04-15 Thread Kaplan, Andrew H.
Hi there --

I am running a server with the 5.6 64-bit distribution, and I am looking for an
unrar rpm package. 
I have several repositories set up on the server which are the following:

base
updates
extras
centosplus
contrib
c5-testing

All are enabled. When I do a yum search for unrar, nothing comes up. Is there
another repository
that I should add to the list, or is there a particular website that I can go to
get the package?

Thanks.




___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOs 5.6 and Time Sync

2011-04-15 Thread Mailing List

On 4/15/2011 4:58 PM, Mailing List wrote:

Johnny,

Sorry about the wrong system ID number; here is what it is.

Dell Inspiron C521
Bios Version 1.1.11 (08/07/2007)

     It is not a VM; it is a regular install. I have not made any 
changes to the kernel options. It has been fine with a stock install, 
so I never had any need to tweak it.


Thank you.
Brian


   I would have answered sooner, but mail from my ISP ended up in the 
trash can due to the list's spam filters.


  I tried the latest kernel that was just rolled out, 
kernel-2.6.18-238.9.1.el5, and it was a mess also.


Brian.



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOs 5.6 and Time Sync

2011-04-15 Thread Mailing List

On 4/15/2011 7:08 AM, Johnny Hughes wrote:


I do not see anything from Dell that is a model C151.

I also do not see anything in the RH bugzilla that is problematic for
older AMD processors and the clock, unless running KVM type virtual
machines.

Is this a VM or regular install?

If this a real machine, do you have the latest BIOS from Dell?

Do you have any special kernel options in grub?



Johnny,

Sorry about the wrong system ID number; here is what it is.

Dell Inspiron C521
Bios Version 1.1.11 (08/07/2007)

     It is not a VM; it is a regular install. I have not made any 
changes to the kernel options. It has been fine with a stock install, so 
I never had any need to tweak it.


Thank you.
Brian





___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Ross Walker
On Apr 15, 2011, at 12:32 PM, Rudi Ahlers  wrote:

> 
> 
> On Fri, Apr 15, 2011 at 6:26 PM, Ross Walker  wrote:
> On Apr 15, 2011, at 9:17 AM, Rudi Ahlers  wrote:
> 
>> 
>> 
>> On Fri, Apr 15, 2011 at 3:05 PM, Christopher Chan 
>>  wrote:
>> On Friday, April 15, 2011 07:24 PM, Benjamin Franz wrote:
>> > On 04/14/2011 09:00 PM, Christopher Chan wrote:
>> >>
>> >> Wanna try that again with 64MB of cache only and tell us whether there
>> >> is a difference in performance?
>> >>
>> >> There is a reason why 3ware 85xx cards were complete rubbish when used
>> >> for raid5 and which led to the 95xx/96xx series.
>> >> _
>> >
>> > I don't happen to have any systems I can test with the 1.5TB drives
>> > without controller cache right now, but I have a system with some old
>> > 500GB drives  (which are about half as fast as the 1.5TB drives in
>> > individual sustained I/O throughput) attached directly to onboard SATA
>> > ports in a 8 x RAID6 with *no* controller cache at all. The machine has
>> > 16GB of RAM and bonnie++ therefore used 32GB of data for the test.
>> >
>> > Version  1.96   --Sequential Output-- --Sequential Input-
>> > --Random-
>> > Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
>> > --Seeks--
>> > MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
>> > /sec %CP
>> > pbox332160M   389  98 76709  22 91071  26  2209  95 264892  26
>> > 590.5  11
>> > Latency 24190us1244ms1580ms   60411us   69901us
>> > 42586us
>> > Version  1.96   --Sequential Create-- Random
>> > Create
>> > pbox3   -Create-- --Read--- -Delete-- -Create-- --Read---
>> > -Delete--
>> > files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>> > /sec %CP
>> >16 10910  31 + +++ + +++ 29293  80 + +++
>> > + +++
>> > Latency   775us 610us 979us 740us 370us
>> > 380us
>> >
>> > Given that the underlaying drives are effectively something like half as
>> > fast as the drives in the other test, the results are quite comparable.
>> 
>> Woohoo, next we will be seeing md raid6 also giving comparable results
>> if that is the case. I am not the only person on this list who thinks
>> cache is king for raid5/6 on hardware raid boards, and using hardware
>> raid + bbu cache for better performance is one of the two reasons why we
>> don't do md raid5/6.
>> 
>> 
>> >
>> > Cache doesn't make a lot of difference when you quickly write a lot more
>> > data than the cache can hold. The limiting factor becomes the slowest
>> > component - usually the drives themselves. Cache isn't magic performance
>> > pixie dust. It helps in certain use cases and is nearly irrelevant in
>> > others.
>> >
>> 
>> Yeah, you are right - but cache is primarily to buffer the writes for
>> performance. Why else go through the expense of getting bbu cache? So
>> what happens when you tweak bonnie a bit?
>> ___
>> 
>> 
>> 
>> As a matter of interest, does anyone know how to use an SSD drive for cache
>> purposes on Linux software RAID drives? ZFS has this feature and it makes a
>> helluva difference to a storage server's performance.
> 
> Put the file system's log device on it.
> 
> -Ross
> 
> 
> ___
> 
> 
> 
> Well, ZFS has a separate ZIL for that purpose, and the ZIL adds extra 
> protection / redundancy to the whole pool. 
> 
> But the Cache / L2ARC drive caches all common reads & writes (simply put) 
> onto SSD to improve overall system performance. 
> 
> So I was wondering if one could do this with mdraid or even just EXT3 / EXT4?

Ext3/4 and XFS allow specifying an external log device which, if it is an SSD, 
can speed up writes. All these file systems aggressively use the page cache for 
read/write caching. The only thing you don't get is L2ARC-type cache, but I have 
heard of a dm-cache project that might provide that type of cache.
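
For XFS the analogous sketch would be (device and mount point names are
illustrative; this reformats /dev/md0):

mkfs.xfs -l logdev=/dev/sdb1,size=128m /dev/md0
mount -o logdev=/dev/sdb1 /dev/md0 /mnt/storage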

-Ross

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cross-platform email client

2011-04-15 Thread Michael Davis
On 4/15/2011 3:46 PM, Florin Andrei wrote:
> On 04/15/2011 12:30 PM, Jeff wrote:
>> By default Thunderbird creates a local cache for IMAP accounts -- for
>> large accounts, this can be problematic. Have you tried disabling the
>> local synchronization?
>>
>> Account Settings ->   Synch&   Storage ->   Uncheck "Keep messages for this
>> account on this computer"
> It's unchecked already.
>

I experienced a similar problem with Thunderbird on Windows. For me, it 
ended up being folder compaction. Changing the settings on compaction 
(Tools/Options/Advanced/Network & Disk Space) reduced the frequency that 
folders are compacted, and thereby my frustration, but did not eliminate 
them. I agree that the UI should not be affected by maintenance functions.

Hope this helps.

Michael Davis
Profician Corporation

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cross-platform email client

2011-04-15 Thread John R Pierce
On 04/15/11 12:45 PM, Florin Andrei wrote:
> On 04/15/2011 12:28 PM, John R Pierce wrote:
>> I think T-bird gets locked up when it's SENDING mail if the server takes
>> too long to reply at the early stages of the protocol. That, or DNS
>> lookups take too long.
> At least in my case - no and no.
>
> It freezes randomly but pretty often, no relation to sending emails.
>
> The IMAP and SMTP servers are defined by IP address, not hostname. But
> even if that were the case, software that blocks the UI completely
> while waiting for something in the background? Sounds like 1999 all over
> again.

My local SMTP server is intentionally configured to verify delivery 
addresses before it accepts a mail. Sometimes this causes delays.


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cross-platform email client

2011-04-15 Thread Florin Andrei
On 04/15/2011 12:30 PM, Jeff wrote:
>
> By default Thunderbird creates a local cache for IMAP accounts -- for
> large accounts, this can be problematic. Have you tried disabling the
> local synchronization?
>
> Account Settings ->  Synch&  Storage ->  Uncheck "Keep messages for this
> account on this computer"

It's unchecked already.

-- 
Florin Andrei
http://florin.myip.org/
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cross-platform email client

2011-04-15 Thread Florin Andrei
On 04/15/2011 12:28 PM, John R Pierce wrote:
>
> I think T-bird gets locked up when it's SENDING mail if the server takes
> too long to reply at the early stages of the protocol. That, or DNS
> lookups take too long.

At least in my case - no and no.

It freezes randomly but pretty often, no relation to sending emails.

The IMAP and SMTP servers are defined by IP address, not hostname. But 
even if that were the case, software that blocks the UI completely 
while waiting for something in the background? Sounds like 1999 all over 
again.

-- 
Florin Andrei
http://florin.myip.org/
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cross-platform email client

2011-04-15 Thread Scott Robbins
On Fri, Apr 15, 2011 at 02:30:10PM -0500, Jeff wrote:

> On Fri, Apr 15, 2011 at 2:07 PM, Florin Andrei  wrote:
> > I'm a Thunderbird user almost since day one, but now I'm looking for
> > something else. For whatever reason, it doesn't work well for me - every
> > once in a while it becomes non-responsive (UI completely frozen for
> > several seconds, CPU usage goes to 100%) and I just can't afford to
> > waste time waiting for the email software to start working again.

> By default Thunderbird creates a local cache for IMAP accounts -- for
> large accounts, this can be problematic. Have you tried disabling the
> local synchronization?
> 
> Account Settings -> Synch & Storage -> Uncheck "Keep messages for this
> account on this computer"
> 
There is another setting that can apparently cause high CPU usage.
Preferences>Advanced>General>Advanced Configuration>Enable Global Search
and Indexer (don't have Thunderbird handy, so that path might be
slightly off.)
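
If you'd rather flip it in the Config Editor, the underlying preference
is, if I'm remembering the name right:

mailnews.database.global.indexer.enabled = false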

-- 
Scott Robbins
PGP keyID EB3467D6
( 1B48 077D 66F6 9DB0 FDC2 A409 FA54 EB34 67D6 )
gpg --keyserver pgp.mit.edu --recv-keys EB3467D6

Buffy: I can't believe you got into Oxford! 
Willow: It's pretty exciting. 
Oz: That's some deep academia there. 
Buffy: That's where they make Gileses! 
Willow: I know! I can learn, and have scones! 
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cross-platform email client

2011-04-15 Thread Jeff
On Fri, Apr 15, 2011 at 2:07 PM, Florin Andrei  wrote:
> I'm a Thunderbird user almost since day one, but now I'm looking for
> something else. For whatever reason, it doesn't work well for me - every
> once in a while it becomes non-responsive (UI completely frozen for
> several seconds, CPU usage goes to 100%) and I just can't afford to
> waste time waiting for the email software to start working again.
>
> My main desktop platform is Linux, but I need a client that works the
> same and looks the same on Windows too. Email server is IMAP with a
> pretty hefty account: over a hundred folders, hundreds of thousands of
> messages total (server-side filtering with Sieve). Typically it's a
> remote session, over VPN. So the client better work well, and be
> glitch-free.
>
> The issues with Thunderbird might be related to the size of my IMAP
> account, plus the VPN latency - but frankly, I don't care, the client
> needs to hide all that stuff from me, do the updates or whatever in the
> background, instead of blocking the UI until it's done. Ironically, it
> blocked when I was done with this paragraph and I hit Enter. Sticking it
> to the man one last time, I guess.
>
> Any suggestions? Thanks.

By default Thunderbird creates a local cache for IMAP accounts -- for
large accounts, this can be problematic. Have you tried disabling the
local synchronization?

Account Settings -> Synch & Storage -> Uncheck "Keep messages for this
account on this computer"

Or at least that's where it is in Windows T-Bird.

--
Jeff
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cross-platform email client

2011-04-15 Thread John R Pierce
On 04/15/11 12:07 PM, Florin Andrei wrote:
> I'm a Thunderbird user almost since day one, but now I'm looking for
> something else. For whatever reason, it doesn't work well for me - every
> once in a while it becomes non-responsive (UI completely frozen for
> several seconds, CPU usage goes to 100%) and I just can't afford to
> waste time waiting for the email software to start working again.
>


I think T-bird gets locked up when it's SENDING mail if the server takes 
too long to reply at the early stages of the protocol. That, or DNS 
lookups take too long.


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] cross-platform email client

2011-04-15 Thread Florin Andrei
I'm a Thunderbird user almost since day one, but now I'm looking for 
something else. For whatever reason, it doesn't work well for me - every 
once in a while it becomes non-responsive (UI completely frozen for 
several seconds, CPU usage goes to 100%) and I just can't afford to 
waste time waiting for the email software to start working again.

My main desktop platform is Linux, but I need a client that works the 
same and looks the same on Windows too. Email server is IMAP with a 
pretty hefty account: over a hundred folders, hundreds of thousands of 
messages total (server-side filtering with Sieve). Typically it's a 
remote session, over VPN. So the client better work well, and be 
glitch-free.

The issues with Thunderbird might be related to the size of my IMAP 
account, plus the VPN latency - but frankly, I don't care, the client 
needs to hide all that stuff from me, do the updates or whatever in the 
background, instead of blocking the UI until it's done. Ironically, it 
blocked when I was done with this paragraph and I hit Enter. Sticking it 
to the man one last time, I guess.

Any suggestions? Thanks.

-- 
Florin Andrei
http://florin.myip.org/
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] 5.6 - SRPM's

2011-04-15 Thread Filipe Rosset
On 04/13/2011 07:54 AM, Karanbir Singh wrote:
> 
> They are definitely in there, just slow.
> 
> - KB

Hi guys,

There are still no SRPMs in 5.6/os/SRPMS/.

-- 
Filipe
Rio Grande do Sul, Brazil
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Rudi Ahlers
On Fri, Apr 15, 2011 at 6:26 PM, Ross Walker  wrote:

> On Apr 15, 2011, at 9:17 AM, Rudi Ahlers  wrote:
>
>
>
> On Fri, Apr 15, 2011 at 3:05 PM, Christopher Chan 
> <
> christopher.c...@bradbury.edu.hk> wrote:
>
>> On Friday, April 15, 2011 07:24 PM, Benjamin Franz wrote:
>> > On 04/14/2011 09:00 PM, Christopher Chan wrote:
>> >>
>> >> Wanna try that again with 64MB of cache only and tell us whether there
>> >> is a difference in performance?
>> >>
>> >> There is a reason why 3ware 85xx cards were complete rubbish when used
>> >> for raid5 and which led to the 95xx/96xx series.
>> >> _
>> >
>> > I don't happen to have any systems I can test with the 1.5TB drives
>> > without controller cache right now, but I have a system with some old
>> > 500GB drives  (which are about half as fast as the 1.5TB drives in
>> > individual sustained I/O throughput) attached directly to onboard SATA
>> > ports in a 8 x RAID6 with *no* controller cache at all. The machine has
>> > 16GB of RAM and bonnie++ therefore used 32GB of data for the test.
>> >
>> > Version  1.96   --Sequential Output-- --Sequential Input-
>> > --Random-
>> > Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
>> > --Seeks--
>> > MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
>> > /sec %CP
>> > pbox332160M   389  98 76709  22 91071  26  2209  95 264892  26
>> > 590.5  11
>> > Latency 24190us1244ms1580ms   60411us   69901us
>> > 42586us
>> > Version  1.96   --Sequential Create-- Random
>> > Create
>> > pbox3   -Create-- --Read--- -Delete-- -Create-- --Read---
>> > -Delete--
>> > files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>> > /sec %CP
>> >16 10910  31 + +++ + +++ 29293  80 + +++
>> > + +++
>> > Latency   775us 610us 979us 740us 370us
>> > 380us
>> >
>> > Given that the underlaying drives are effectively something like half as
>> > fast as the drives in the other test, the results are quite comparable.
>>
>> Woohoo, next we will be seeing md raid6 also giving comparable results
>> if that is the case. I am not the only person on this list who thinks
>> cache is king for raid5/6 on hardware raid boards, and using hardware
>> raid + bbu cache for better performance is one of the two reasons why we
>> don't do md raid5/6.
>>
>>
>> >
>> > Cache doesn't make a lot of difference when you quickly write a lot more
>> > data than the cache can hold. The limiting factor becomes the slowest
>> > component - usually the drives themselves. Cache isn't magic performance
>> > pixie dust. It helps in certain use cases and is nearly irrelevant in
>> > others.
>> >
>>
>> Yeah, you are right - but cache is primarily to buffer the writes for
>> performance. Why else go through the expense of getting bbu cache? So
>> what happens when you tweak bonnie a bit?
>> ___
>>
>>
>
> As a matter of interest, does anyone know how to use an SSD drive for cache
> purposes on Linux software RAID drives? ZFS has this feature and it makes a
> helluva difference to a storage server's performance.
>
>
> Put the file system's log device on it.
>
> -Ross
>
>
> ___
>



Well, ZFS has a separate ZIL for that purpose, and the ZIL adds extra
protection / redundancy to the whole pool.

But the Cache / L2ARC drive caches all common reads & writes (simply put)
onto SSD to improve overall system performance.

So I was wondering if one could do this with mdraid or even just EXT3 /
EXT4?



-- 
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Ross Walker
On Apr 15, 2011, at 9:17 AM, Rudi Ahlers  wrote:

> 
> 
> On Fri, Apr 15, 2011 at 3:05 PM, Christopher Chan 
>  wrote:
> On Friday, April 15, 2011 07:24 PM, Benjamin Franz wrote:
> > On 04/14/2011 09:00 PM, Christopher Chan wrote:
> >>
> >> Wanna try that again with 64MB of cache only and tell us whether there
> >> is a difference in performance?
> >>
> >> There is a reason why 3ware 85xx cards were complete rubbish when used
> >> for raid5 and which led to the 95xx/96xx series.
> >> _
> >
> > I don't happen to have any systems I can test with the 1.5TB drives
> > without controller cache right now, but I have a system with some old
> > 500GB drives  (which are about half as fast as the 1.5TB drives in
> > individual sustained I/O throughput) attached directly to onboard SATA
> > ports in a 8 x RAID6 with *no* controller cache at all. The machine has
> > 16GB of RAM and bonnie++ therefore used 32GB of data for the test.
> >
> > Version  1.96   --Sequential Output-- --Sequential Input-
> > --Random-
> > Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
> > --Seeks--
> > MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
> > /sec %CP
> > pbox332160M   389  98 76709  22 91071  26  2209  95 264892  26
> > 590.5  11
> > Latency 24190us1244ms1580ms   60411us   69901us
> > 42586us
> > Version  1.96   --Sequential Create-- Random
> > Create
> > pbox3   -Create-- --Read--- -Delete-- -Create-- --Read---
> > -Delete--
> > files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> > /sec %CP
> >16 10910  31 + +++ + +++ 29293  80 + +++
> > + +++
> > Latency   775us 610us 979us 740us 370us
> > 380us
> >
> > Given that the underlaying drives are effectively something like half as
> > fast as the drives in the other test, the results are quite comparable.
> 
> Woohoo, next we will be seeing md raid6 also giving comparable results
> if that is the case. I am not the only person on this list who thinks
> cache is king for raid5/6 on hardware raid boards, and using hardware
> raid + bbu cache for better performance is one of the two reasons why we
> don't do md raid5/6.
> 
> 
> >
> > Cache doesn't make a lot of difference when you quickly write a lot more
> > data than the cache can hold. The limiting factor becomes the slowest
> > component - usually the drives themselves. Cache isn't magic performance
> > pixie dust. It helps in certain use cases and is nearly irrelevant in
> > others.
> >
> 
> Yeah, you are right - but cache is primarily to buffer the writes for
> performance. Why else go through the expense of getting bbu cache? So
> what happens when you tweak bonnie a bit?
> ___
> 
> 
> 
> As a matter of interest, does anyone know how to use an SSD drive for cache
> purposes on Linux software RAID drives? ZFS has this feature and it makes a
> helluva difference to a storage server's performance.

Put the file system's log device on it.

-Ross

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOs 5.6 and Time Sync

2011-04-15 Thread Nataraj
On 04/15/2011 04:08 AM, Johnny Hughes wrote:
> On 04/14/2011 06:23 AM, Mailing List wrote:
>> On 4/14/2011 6:47 AM, Johnny Hughes wrote:
>>> Is it really true that the time is working perfectly with one of the
>>> other kernels (the older ones)?
>>>
>>>
>>>
>>Johnny,
>>
>>   Yes, as long as I run the older 5.5 kernel my time is perfect. All
>> clients can get from this machine with no issues. As soon as I run new
>> kernel, or Plus kernel for that matter. The time goes downhill. "Uphill
>> actually"
>>   
>> To answer the previous question I do have the HW clock set to utc,
>> Everything is stock from initial install of the package.
>>
>> Brian.
> I do not see anything from Dell that is a model C151.
>
> I also do not see anything in the RH bugzilla that is problematic for
> older AMD processors and the clock, unless running KVM type virtual
> machines.
>
> Is this a VM or regular install?
>
> If this a real machine, do you have the latest BIOS from Dell?
>
> Do you have any special kernel options in grub?
>
>
>
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
It also occurred to me to ask if this was running in a VM, but it sounded
like it was running on actual hardware. I once had a VMware VM in
which I had similar misbehavior of the clock. Eventually I discovered
that the following simple program, when run inside the VM, would return
immediately instead of delaying for 10 seconds as it should.

#include <stdio.h>
/* #include <sys/types.h> */
#include <sys/time.h>
#include <sys/select.h>
#include <unistd.h>

int main() {
    fd_set set;
    struct timeval timeout;
    int filedes = STDIN_FILENO;    /* watch stdin */

    FD_ZERO(&set);
    FD_SET(filedes, &set);

    /* ten-second timeout */
    timeout.tv_sec = 10;
    timeout.tv_usec = 0;

    /* should block for ~10 seconds when no input arrives */
    select(FD_SETSIZE, &set, NULL, NULL, &timeout);

    return 0;
}
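
To reproduce the check, compile the program and run it from an interactive
shell without typing anything (compiler invocation illustrative):

$ gcc -o selecttest selecttest.c
$ time ./selecttest

With healthy timekeeping it should take about ten seconds to return; in
the misbehaving VM it came back immediately.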


I then found out that the ISP had set the guest OS type for my VM to Ubuntu
when I was actually running CentOS 5 in the VM. The cause was that VMware
assumed a tickless kernel for Ubuntu, but not for CentOS 5, and there
were optimizations in the VM emulation that counted on VMware knowing
what timekeeping options were set in the kernel.

Nataraj

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Hans Vos
> Not sure about the outcome of copying the yum directory. I would have just
> run yum clean all and then yum update.

Ah, thanks, I will put that in my personal Wiki for future reference. 
Noob here and it is a test environment at home :). Thanks for your help.

-- 
Kind regards,

Hans Vos
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Cal Webster
On Fri, 2011-04-15 at 11:07 -0400, Ryan Wagoner wrote:
> On Fri, Apr 15, 2011 at 11:00 AM, Hans Vos  wrote:
> > Well, I copied the /var/cache/yum/timedhosts.txt file from server 1 to
> > server 2. Then I ran yum update and all kinds of errors came flying at me.
> > So I just SCP'ed the whole /var/cache/yum directory of server 1 to
> > server 2. Ran yum update and there were the updates I was missing
> > including the new kernel-xen. I don't know if this was the *proper* way
> > of fixing it but it did the job :P.
> >
> 
> Not sure about the outcome of copying the yum directory. I would have just
> run yum clean all and then yum update.
> 
> Ryan

+1

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Cal Webster
On Fri, 2011-04-15 at 17:00 +0200, Hans Vos wrote:
> Hello,
> 
> > Ryan is right. The mirrors need to sync up. That's most likely the
> > cause. Still, it's curious why you have two kernels listed in grub.conf
> > and only one listed from yum. You should also see the 2.6.18-238.el5xen
> > kernel listed.
> 
> Well, I copied the /var/cache/yum/timedhosts.txt file from server 1 to 
> server 2. Then I ran yum update and all kinds of errors came flying at me. 
> So I just SCP'ed the whole /var/cache/yum directory of server 1 to 
> server 2. Ran yum update and there were the updates I was missing 
> including the new kernel-xen. I don't know if this was the *proper* way 
> of fixing it but it did the job :P.
> 
You shouldn't need to copy the timedhosts.txt file; the "fastestmirror"
yum plugin should recreate it. You might check /var/log/yum.log
or /var/log/messages to make some sense of the errors. I don't see any
harm in using the cache from the other machine, though.


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] [OT] ups advice

2011-04-15 Thread Howard Fleming


On 4/15/2011 10:48, John R Pierce wrote:
> On 04/15/11 4:38 AM, Howard Fleming wrote:
>> I have used several Best Ferrups UPSs over the years and, other than one
>> that toasted its transformer, have never had any trouble out of them
>> (just replace the battery every 3 to 4 years).
>>
>> They are picky about their input power: do not run them on or connect them to
>> an auto-regulating transformer (not the proper term?), or on the output
>> of other UPSs; it can cause interesting problems...
>
> I don't think they make any FerrUPS anymore.  Those were based on a
> massive ferroresonant transformer which, yes, is very sensitive to the
> input frequency.  Specifically, they don't like generator power, unless
> it has extremely well regulated frequency output (such as a DC generator
> with a digital sinusoidal converter)

Eaton has the Ferrups line now (still available as far as I know).

I have actually run into the input frequency problem in the past with 
the Ferrups.

I was working at a gas company that, for political reasons, generated 
their own power in-house. We had one Ferrups UPS (of 10?) that was 
complaining about it (it kept going online/offline/online). There is a 
parameter in the settings that can be adjusted to allow a greater input 
frequency range on the UPS (59.5 - 60.5 Hz is the default range, from what 
I remember). In this case that took care of the problem.

I have 3 1.4kW units at home, with no trouble to date running them off of 
my backup generator (Campbell 5k unit). They are also 18 years old at this 
point and still going... :o). They are running 3 of my CentOS servers at 
home, in fact.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Ryan Wagoner
On Fri, Apr 15, 2011 at 11:00 AM, Hans Vos  wrote:
> Well, I copied the /var/cache/yum/timedhosts.txt file from server 1 to
> server 2. Then I ran yum update and all kinds of errors came flying at me.
> So I just SCP'ed the whole /var/cache/yum directory of server 1 to
> server 2. Ran yum update and there were the updates I was missing
> including the new kernel-xen. I don't know if this was the *proper* way
> of fixing it but it did the job :P.
>

Not sure about the outcome of copying the yum directory. I would have just
run yum clean all and then yum update.

Ryan
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Hans Vos
Hello,

> Ryan is right. The mirrors need to sync up. That's most likely the
> cause. Still, it's curious why you have two kernels listed in grub.conf
> and only one listed from yum. You should also see the 2.6.18-238.el5xen
> kernel listed.

Well, I copied the /var/cache/yum/timedhosts.txt file from server 1 to 
server 2. Then I ran yum update and all kinds of errors came flying at me. 
So I just SCP'ed the whole /var/cache/yum directory of server 1 to 
server 2. Ran yum update and there were the updates I was missing 
including the new kernel-xen. I don't know if this was the *proper* way 
of fixing it but it did the job :P.

-- 
Kind regards,

Hans Vos
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Cal Webster
On Fri, 2011-04-15 at 16:37 +0200, Hans Vos wrote:
> Hello Cal,
> 
> Thank you for your reply.
> 
> > It's possible that your #2 server has not rebooted or had problems with
> > the latest kernel or just has the default set to something other than
> > "0" in grub.conf.
> 
> I did a reboot and checked the grub.conf. Should have mentioned that.
> 
> > What's the output of:
> >
> > egrep 'default|title' /etc/grub.conf
> 
> # egrep 'default|title' /etc/grub.conf
> default=0
> title CentOS (2.6.18-238.5.1.el5xen)
> title CentOS (2.6.18-238.el5xen)
> 
> > yum list kernel | grep kernel
> 
> yum list kernel | grep kernel
> kernel.x86_64    2.6.18-238.5.1.el5    updates

Ryan is right. The mirrors need to sync up. That's most likely the
cause. Still, it's curious why you have two kernels listed in grub.conf
and only one listed from yum. You should also see the 2.6.18-238.el5xen
kernel listed.

./Cal

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] [OT] ups advice

2011-04-15 Thread John R Pierce
On 04/15/11 4:38 AM, Howard Fleming wrote:
> I have used several Best Ferrups UPSs over the years and, other than one
> that toasted its transformer, have never had any trouble out of them
> (just replace the battery every 3 to 4 years).
>
> They are picky about their input power: do not run them on or connect them to
> an auto-regulating transformer (not the proper term?), or on the output
> of other UPSs; it can cause interesting problems...

I don't think they make any FerrUPS anymore.  Those were based on a 
massive ferroresonant transformer which, yes, is very sensitive to the 
input frequency.  Specifically, they don't like generator power, unless 
it has extremely well regulated frequency output (such as a DC generator 
with a digital sinusoidal converter)


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Hans Vos
Hello,

> The 9.1 kernel update was released last night. The mirrors must still
> be catching up. I was able to update one box to 9.1, and my other
> boxes still don't see the 9.1 update.

Ah, that might also explain why there were 51 updates on server 1 and only 
33 on server 2. I could not figure out why; I was comparing installed 
packages as well.

-- 
Kind regards,

Hans Vos
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Hans Vos
Hello Cal,

Thank you for your reply.

> It's possible that your #2 server has not rebooted or had problems with
> the latest kernel or just has the default set to something other than
> "0" in grub.conf.

I did a reboot and checked the grub.conf. Should have mentioned that.

> What's the output of:
>
> egrep 'default|title' /etc/grub.conf

# egrep 'default|title' /etc/grub.conf
default=0
title CentOS (2.6.18-238.5.1.el5xen)
title CentOS (2.6.18-238.el5xen)

> yum list kernel | grep kernel

yum list kernel | grep kernel
kernel.x86_64    2.6.18-238.5.1.el5    updates

Also if I do "yum info kernel-xen" I get this on server 1:

Name   : kernel-xen
Arch   : x86_64
Version: 2.6.18
Release: 238.9.1.el5
Size   : 95 M
Repo   : installed
Summary: The Linux kernel compiled for Xen VM operations
URL: http://www.kernel.org/
License: GPLv2
Description: This package includes a version of the Linux kernel which
: runs in Xen VM. It works for both priviledged and 
unpriviledged guests.

And this on server 2:

Name   : kernel-xen
Arch   : x86_64
Version: 2.6.18
Release: 238.5.1.el5
Size   : 95 M
Repo   : installed
Summary: The Linux kernel compiled for Xen VM operations
URL: http://www.kernel.org/
License: GPLv2
Description: This package includes a version of the Linux kernel which
: runs in Xen VM. It works for both priviledged and 
unpriviledged guests.

-- 
Kind regards,

Hans Vos
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Ryan Wagoner
On Fri, Apr 15, 2011 at 10:18 AM, Hans Vos  wrote:
> Hello,
>
> Earlier this week I installed a test server with CentOS 5.6 with
> Virtualization enabled during the installer. Today I installed another
> server using the same method (they are identical servers). I just did a
> yum update and I found something curious. Both servers have a different
> kernel. Server 1 is at the 9.1 version and server 2 at 5.1. How can this be?
> How do I get the latest version on server 2? If I run yum update there
> are none available.

The 9.1 kernel update was released last night. The mirrors must still
be catching up. I was able to update one box to 9.1, and my other
boxes still don't see the 9.1 update.

Ryan
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Cal Webster
It's possible that your #2 server has not rebooted or had problems with
the latest kernel or just has the default set to something other than
"0" in grub.conf.

What's the output of:

egrep 'default|title' /etc/grub.conf

yum list kernel | grep kernel

./Cal

On Fri, 2011-04-15 at 16:18 +0200, Hans Vos wrote:
> Hello,
> 
> Earlier this week I installed a test server with CentOS 5.6 with 
> Virtualization enabled during the installer. Today I installed another 
> server using the same method (they are identical servers). I just did a 
> yum update and I found something curious. Both servers have a different 
> kernel. Server 1 is at the 9.1 version and server 2 at 5.1. How can this be? 
> How do I get the latest version on server 2? If I run yum update there 
> are none available.
> 
> If I input xm info I get this on server 1:
> 
> host   : server1
> release: 2.6.18-238.9.1.el5xen
> version: #1 SMP Tue Apr 12 18:53:56 EDT 2011
> machine: x86_64
> nr_cpus: 4
> nr_nodes   : 1
> sockets_per_node   : 1
> cores_per_socket   : 4
> threads_per_core   : 1
> cpu_mhz: 2400
> hw_caps: 
> bfebfbff:20100800::0940:e3bd::0001
> total_memory   : 4095
> free_memory: 383
> node_to_cpu: node0:0-3
> xen_major  : 3
> xen_minor  : 1
> xen_extra  : .2-238.9.1.el5
> xen_caps   : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 
> hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_pagesize   : 4096
> platform_params: virt_start=0x8000
> xen_changeset  : unavailable
> cc_compiler: gcc version 4.1.2 20080704 (Red Hat 4.1.2-50)
> cc_compile_by  : mockbuild
> cc_compile_domain  : centos.org
> cc_compile_date: Tue Apr 12 18:01:03 EDT 2011
> xend_config_format : 2
> 
> And on server 2 it is this:
> 
> host   : server2
> release: 2.6.18-238.5.1.el5xen
> version: #1 SMP Fri Apr 1 19:35:13 EDT 2011
> machine: x86_64
> nr_cpus: 4
> nr_nodes   : 1
> sockets_per_node   : 1
> cores_per_socket   : 4
> threads_per_core   : 1
> cpu_mhz: 2400
> hw_caps: 
> bfebfbff:20100800::0940:e3bd::0001
> total_memory   : 4095
> free_memory: 383
> node_to_cpu: node0:0-3
> xen_major  : 3
> xen_minor  : 1
> xen_extra  : .2-238.5.1.el5
> xen_caps   : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 
> hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_pagesize   : 4096
> platform_params: virt_start=0x8000
> xen_changeset  : unavailable
> cc_compiler: gcc version 4.1.2 20080704 (Red Hat 4.1.2-50)
> cc_compile_by  : mockbuild
> cc_compile_domain  : centos.org
> cc_compile_date: Fri Apr  1 18:30:53 EDT 2011
> xend_config_format : 2
> 

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] Two cleanly installed CentOS 5.6 servers but with different Xen kernel versions

2011-04-15 Thread Hans Vos
Hello,

Earlier this week I installed a test server with CentOS 5.6 with 
Virtualization enabled during the installer. Today I installed another 
server using the same method (they are identical servers). I just did a 
yum update and I found something curious. The two servers have different 
kernels. Server 1 is at the 9.1 version and server 2 at 5.1. How can this be? 
How do I get the latest version on server 2? If I run yum update, there 
are none available.

If I input xm info I get this on server 1:

host   : server1
release: 2.6.18-238.9.1.el5xen
version: #1 SMP Tue Apr 12 18:53:56 EDT 2011
machine: x86_64
nr_cpus: 4
nr_nodes   : 1
sockets_per_node   : 1
cores_per_socket   : 4
threads_per_core   : 1
cpu_mhz: 2400
hw_caps: 
bfebfbff:20100800::0940:e3bd::0001
total_memory   : 4095
free_memory: 383
node_to_cpu: node0:0-3
xen_major  : 3
xen_minor  : 1
xen_extra  : .2-238.9.1.el5
xen_caps   : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 
hvm-3.0-x86_32p hvm-3.0-x86_64
xen_pagesize   : 4096
platform_params: virt_start=0x8000
xen_changeset  : unavailable
cc_compiler: gcc version 4.1.2 20080704 (Red Hat 4.1.2-50)
cc_compile_by  : mockbuild
cc_compile_domain  : centos.org
cc_compile_date: Tue Apr 12 18:01:03 EDT 2011
xend_config_format : 2

And on server 2 it is this:

host   : server2
release: 2.6.18-238.5.1.el5xen
version: #1 SMP Fri Apr 1 19:35:13 EDT 2011
machine: x86_64
nr_cpus: 4
nr_nodes   : 1
sockets_per_node   : 1
cores_per_socket   : 4
threads_per_core   : 1
cpu_mhz: 2400
hw_caps: 
bfebfbff:20100800::0940:e3bd::0001
total_memory   : 4095
free_memory: 383
node_to_cpu: node0:0-3
xen_major  : 3
xen_minor  : 1
xen_extra  : .2-238.5.1.el5
xen_caps   : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 
hvm-3.0-x86_32p hvm-3.0-x86_64
xen_pagesize   : 4096
platform_params: virt_start=0x8000
xen_changeset  : unavailable
cc_compiler: gcc version 4.1.2 20080704 (Red Hat 4.1.2-50)
cc_compile_by  : mockbuild
cc_compile_domain  : centos.org
cc_compile_date: Fri Apr  1 18:30:53 EDT 2011
xend_config_format : 2

-- 
Met vriendelijke groet / With kind regards,

Hans Vos
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Jerry Franz
On 04/15/2011 06:05 AM, Christopher Chan wrote:
>
> Woohoo, next we will be seeing md raid6 also giving comparable results
> if that is the case. I am not the only person on this list that thinks
> cache is king for raid5/6 on hardware raid boards and the using hardware
> raid + bbu cache for better performance one of the two reasons why we
> don't do md raid5/6.
>
>

That *is* md RAID6. Sorry I didn't make that clear. I don't use anyone's 
hardware RAID6 right now because I haven't found a board so far that was 
as fast as using md. I get better performance from even a BBU-backed 95x 
series 3ware board by using it to serve the drives as JBOD and then 
using md to do the actual RAID.
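
A sketch of that arrangement (device names made up; the controller exports
eight disks as JBOD and md does the RAID6):

mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
mkfs.ext4 /dev/md0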

> Yeah, you are right - but cache is primarily to buffer the writes for
> performance. Why else go through the expense of getting bbu cache? So
> what happens when you tweak bonnie a bit?

For smaller writes. When writes *do* fit in the cache you get a big 
bump. As I said: Helps some cases, not all cases. BBU backed cache helps 
if you have lots of small writes. Not so much if you are writing 
gigabytes of stuff more sequentially.

-- 
Benjamin Franz
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] php53 and MSSQL

2011-04-15 Thread John Beranek
On 15/04/11 13:49, Phil Schaffner wrote:
> John Beranek wrote on 04/15/2011 07:45 AM:
>> On 15/04/11 12:23, John Beranek wrote:
>>> [Reposted now I've joined the list, so I hopefully don't get moderated out]
>>>
>>> Hi,
>>>
>>> I've upgraded lots of machines to 5.6 (thanks!) and there was one
>>> particular machine that I'd also like to upgrade to PHP 5.3.
>>> Unfortunately it seems I can't.
>>>
>>> On the machine I have php-mssql installed, and it appears that there is
>>> no php53-mssql.
>>
>> I was going to see if I could rebuild the php53 SRPM with MSSQL
>> support, until I found that the SRPMs still aren't available on the
>> CentOS mirrors yet. Downloading the upstream RPM now, will see how that
>> goes...
> 
> 
> I sound like a shill for IUS this morning - not the case I assure you - 
> but they have php53u-mssql-5.3.6-1.ius.el5

Well, I've now rebuilt the RHEL SRPM with mssql support. It's now built
in the openSUSE Build Service at:

https://build.opensuse.org/package/show?package=php53&project=home%3Ajohnberanek%3Aphp53_centos

Not ideal, in that it's the whole php53 SRPM, and additionally because
OBS is currently building with CentOS 5.5 instead of 5.6. The latter
issue led me to raise a bug in the OBS Bugzilla:

https://bugzilla.novell.com/show_bug.cgi?id=687848
Update CentOS build to 5.6

I installed my built PHP 5.3 RPMs on the machine I wanted them on -
painful! Why do you need to remove the PHP 5.1 RPMs before you can
install the 'php53' ones? Surely the php53 RPMs could have had
"Obsoletes" lines!?

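For anyone wanting to repeat it, the rebuild went roughly like this (paths
and the exact SRPM version are from memory, so treat them as illustrative):

rpm -ivh php53-5.3.3-1.el5.src.rpm
# add an mssql subpackage, built against freetds, to the spec
vi /usr/src/redhat/SPECS/php53.spec
rpmbuild -ba /usr/src/redhat/SPECS/php53.spec
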
John.

-- 
John Beranek To generalise is to be an idiot.
http://redux.org.uk/ -- William Blake



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Rudi Ahlers
On Fri, Apr 15, 2011 at 3:05 PM, Christopher Chan <
christopher.c...@bradbury.edu.hk> wrote:

> On Friday, April 15, 2011 07:24 PM, Benjamin Franz wrote:
> > On 04/14/2011 09:00 PM, Christopher Chan wrote:
> >>
> >> Wanna try that again with 64MB of cache only and tell us whether there
> >> is a difference in performance?
> >>
> >> There is a reason why 3ware 85xx cards were complete rubbish when used
> >> for raid5 and which led to the 95xx/96xx series.
> >> _
> >
> > I don't happen to have any systems I can test with the 1.5TB drives
> > without controller cache right now, but I have a system with some old
> > 500GB drives  (which are about half as fast as the 1.5TB drives in
> > individual sustained I/O throughput) attached directly to onboard SATA
> > ports in a 8 x RAID6 with *no* controller cache at all. The machine has
> > 16GB of RAM and bonnie++ therefore used 32GB of data for the test.
> >
> > Version  1.96   --Sequential Output-- --Sequential Input-
> > --Random-
> > Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
> > --Seeks--
> > MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
> > /sec %CP
> > pbox332160M   389  98 76709  22 91071  26  2209  95 264892  26
> > 590.5  11
> > Latency 24190us1244ms1580ms   60411us   69901us
> > 42586us
> > Version  1.96   --Sequential Create-- Random
> > Create
> > pbox3   -Create-- --Read--- -Delete-- -Create-- --Read---
> > -Delete--
> > files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> > /sec %CP
> >16 10910  31 + +++ + +++ 29293  80 + +++
> > + +++
> > Latency   775us 610us 979us 740us 370us
> > 380us
> >
> > Given that the underlaying drives are effectively something like half as
> > fast as the drives in the other test, the results are quite comparable.
>
> Woohoo, next we will be seeing md raid6 also giving comparable results
> if that is the case. I am not the only person on this list who thinks
> cache is king for raid5/6 on hardware raid boards, and using hardware
> raid + bbu cache for better performance is one of the two reasons why we
> don't do md raid5/6.
>
>
> >
> > Cache doesn't make a lot of difference when you quickly write a lot more
> > data than the cache can hold. The limiting factor becomes the slowest
> > component - usually the drives themselves. Cache isn't magic performance
> > pixie dust. It helps in certain use cases and is nearly irrelevant in
> > others.
> >
>
> Yeah, you are right - but cache is primarily to buffer the writes for
> performance. Why else go through the expense of getting bbu cache? So
> what happens when you tweak bonnie a bit?
> ___
>
>

As a matter of interest, does anyone know how to use an SSD drive for cache
purposes on Linux software RAID drives? ZFS has this feature and it makes a
helluva difference to a storage server's performance.



-- 
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Christopher Chan
On Friday, April 15, 2011 07:24 PM, Benjamin Franz wrote:
> On 04/14/2011 09:00 PM, Christopher Chan wrote:
>>
>> Wanna try that again with 64MB of cache only and tell us whether there
>> is a difference in performance?
>>
>> There is a reason why 3ware 85xx cards were complete rubbish when used
>> for raid5 and which led to the 95xx/96xx series.
>> _
>
> I don't happen to have any systems I can test with the 1.5TB drives
> without controller cache right now, but I have a system with some old
> 500GB drives  (which are about half as fast as the 1.5TB drives in
> individual sustained I/O throughput) attached directly to onboard SATA
> ports in a 8 x RAID6 with *no* controller cache at all. The machine has
> 16GB of RAM and bonnie++ therefore used 32GB of data for the test.
>
> [bonnie++ output snipped -- see Benjamin Franz's original message later
> in this digest.]
>
> Given that the underlying drives are effectively something like half as
> fast as the drives in the other test, the results are quite comparable.

Woohoo, next we will be seeing md raid6 also giving comparable results 
if that is the case. I am not the only person on this list who thinks 
cache is king for raid5/6 on hardware raid boards, and using hardware 
raid + BBU cache for better performance is one of the two reasons why we 
don't do md raid5/6.


>
> Cache doesn't make a lot of difference when you quickly write a lot more
> data than the cache can hold. The limiting factor becomes the slowest
> component - usually the drives themselves. Cache isn't magic performance
> pixie dust. It helps in certain use cases and is nearly irrelevant in
> others.
>

Yeah, you are right - but cache is primarily to buffer the writes for 
performance. Why else go through the expense of getting bbu cache? So 
what happens when you tweak bonnie a bit?
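
For example (untested; flags per bonnie++ 1.96, mount point and user made
up), pinning the file size well past any cache and telling bonnie++ the
real RAM size so the page cache cannot flatter the numbers:

# 32GB working set, 16GB of RAM declared, 16*1024 files, run as nobody
bonnie++ -d /mnt/array -s 32768 -r 16384 -n 16 -u nobody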
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Peter Kjellström
On Thursday, April 14, 2011 05:26:41 PM Ross Walker wrote:
> 2011/4/14 Peter Kjellström :
...
> > While I do concede the obvious point regarding rebuild time (raid6 takes
> > from long to very long to rebuild) I'd like to point out:
> > 
> >  * If you do the math for a 12 drive raid10 vs raid6 then (using actual
> > data from ~500 1T drives on HP cciss controllers during two years)
> > raid10 is ~3x more likely to cause hard data loss than raid6.
> > 
> >  * mtbf is not everything there's also the thing called unrecoverable
> > read errors. If you hit one while rebuilding your raid10 you're toast
> > while in the raid6 case you'll use your 2nd parity and continue the
> > rebuild.
> 
> You mean if the other side of the mirror fails while rebuilding it.

No, the drive (unrecoverably) failing to read a sector is not the same thing 
as a drive failure. Drive failure frequency expressed in mtbf is around 1M 
hours (even though including predictive fail we see more like 250K hours). 
Unrecoverable read error rates were quite recently on the order of one error 
per 1x to 10x of the drive size read (a drive I looked up now was spec'ed a 
lot higher, at ~1000x drive size). If we assume a raid10 rebuild time of 12h 
and an unrecoverable read error once every 10x of drive size, then the 
effective mean time between read errors is 120h (two to ten thousand times 
worse than the drive mtbf). Admittedly these numbers are hard to get and 
equally hard to trust (or double check).
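
(Spelling out that arithmetic: one unrecoverable error per ~10 drive-sizes
read, and a rebuild reads one drive-size per ~12h, so at rebuild-rate
reading you expect one error per 10 x 12h = 120h.)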

What it all comes down to is that raid10 (assuming just double- not triple 
copy) stores your data with one extra copy/parity and in a single drive 
failure scenario you have zero extra data left (on that part of the array). 
That is, you depend on each and every bit of that (meaning the degraded part) 
data being correctly read. This means you very much want both:

 1) Very fast rebuilds (=> you need hot-spare)
 2) An unrecoverable read error rate much larger than your drive size

or as you suggest below:

 3) Triple copy

> Yes this is true, of course if this happens with RAID6 it will rebuild
> from parity IF there is a second hotspare available,

This is wrong, hot-spares are not that necessary when using raid6. This has to 
do with the fact that rebuild times (time from you start being vulnerable to 
whatever rebuild completes) are already long. An added 12h for a tech to swap 
in the spare only marginally increases your risks.

> cause remember
> the first failure wasn't cleared before the second failure occurred.
> Now your RAID6 is in severe degraded state, one more failure before
> either of these disks is rebuilt will mean toast for the array.

All of this was taken into account in my original example above. In the end 
(with my data) raid10 was around 3x more likely to cause ultimate data loss 
than raid6.

> Now
> the performance of the array is practically unusable and the load on
> the disks is high as it does a full recalculation rebuild, and if they
> are large it will be high for a very long time, now if any other disk
> in the very large RAID6 array is near failure, or has a bad sector,
> this taxing load could very well push it over the edge

In my example a 12 drive raid6 rebuild takes 6-7 days this works out to < 5 
MB/s seq read per drive. This added load is not very noticeable in our 
environment (taking into account normal patrol reads and user data traffic).

Either way, the general problem of "[rebuild stress] pushing drives over the 
edge" is a larger threat to raid10 than raid6 (it being fatal in the first 
case...).

> and the risk of
> such an event occurring increases with the size of the array and the
> size of the disk surface.
> 
> I think this is where the mdraid raid10 shines because it can have 3
> copies (or more) of the data instead of just two,

I think we've now moved into what most people would call unreasonable. Let's 
see what we have for a 12 drive box (quite common 2U size):

 raid6: 12x on raid6 no hot spare (see argument above) => 10 data drives
 raid10: 11x triple store on raid10, one spare => 3.66 data drives

or (if your raid's not odd-drive capable):

 raid10: 9x triple store on raid10, one to three spares => 3 data drives

(ok, yes you could get 4 data drives out of it if you skipped hot-spare)

That is almost a 2.7x-3.3x difference! My users sure care if their X $ buys 
1/3 the space (or, if you prefer, 3x the cost for the same space).

On top of this, most raid implementations lack triple-copy functionality 
for raid10.

Also note that a raid10 that allows an odd number of drives is more 
vulnerable to 2nd-drive failures, resulting in an even larger than 3x 
improvement for raid6 (vs a double-copy, odd-drive-capable raid10).
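
(For reference, mdadm spells the triple-copy raid10 Ross mentions as a
layout option; a sketch with made-up device names:

mdadm --create /dev/md0 --level=10 --layout=n3 --raid-devices=9 /dev/sd[b-j]1

keeps three "near" copies of every block across the nine drives.)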

/Peter

> of course a three
> times (or more) the cost. It also allows for uneven number of disks as
> it just saves copies on different spindles rather then "mirrors". This
> I think provides the best protection against failure and the best
> performance, but at the worst cost, but 

Re: [CentOS] php53 and MSSQL

2011-04-15 Thread Phil Schaffner
John Beranek wrote on 04/15/2011 07:45 AM:
> On 15/04/11 12:23, John Beranek wrote:
>> [Reposted now I've joined the list, so I hopefully don't get moderated out]
>>
>> Hi,
>>
>> I've upgraded lots of machines to 5.6 (thanks!) and there was one
>> particular machine that I'd also like to upgrade to PHP 5.3.
>> Unfortunately it seems I can't.
>>
>> On the machine I have php-mssql installed, and it appears that there is
>> no php53-mssql.
>
> I was going to see if I could rebuild the php53 SRPM with MSSQL
> support, until I found that the SRPMs still aren't available on the
> CentOS mirrors yet. Downloading the upstream RPM now, will see how that
> goes...


I sound like a shill for IUS this morning - not the case I assure you - 
but they have php53u-mssql-5.3.6-1.ius.el5

Probably will not work unless you uninstall php or php53 and install 
their whole set.
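
Something along these lines might do it in one transaction (untested, and
the exact IUS package set beyond php53u and php53u-mssql is a guess):

yum shell <<'EOF'
remove php php-*
install php53u php53u-mssql
run
EOF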

Phil
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] php53 and mcrypt

2011-04-15 Thread Phil Schaffner
Rainer Traut wrote on 04/15/2011 07:55 AM:
...
> Yeah, I had the same problem with missing php_mcrypt. ;)
> I did a full rebuild of php53 with patched spec so that it produces
> php53_mcrypt but that is not very elegant.
> The more elegant way to do it is to make an rpm for only the missing
> modules like EPEL's "php-extras".
> So I'm interested in this, too.

Another possibility is using what IUS has already done and installing 
php53u packages.  See the following CentOS forum thread for details:
https://www.centos.org/modules/newbb/viewtopic.php?viewmode=flat&topic_id=30881&forum=38

Phil
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] php53 and mcrypt

2011-04-15 Thread Rainer Traut
Am 15.04.2011 13:32, schrieb Geoff Galitz:
> More PHP fun!
> I can see in the spec files that php-mcrypt support was removed by
> Redhat. I tried to find out why but I don't have sufficient access to
> redhat bugzilla. I am wondering if it is actually necessary as I have
> also run across a post or two that indicates applications that rely on
> mcrypt still work with the new php53.
> Perhaps mcrypt was superseded by another module or PHP core code?

Yeah, I had the same problem with missing php_mcrypt. ;)
I did a full rebuild of php53 with patched spec so that it produces 
php53_mcrypt but that is not very elegant.
The more elegant way to do it is to make an rpm for only the missing 
modules like EPEL's "php-extras".
So I'm interested in this, too.
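
For anyone wanting to repeat the rebuild, it went roughly like this
(a sketch from memory; the SRPM name/version and the CentOS 5 default
/usr/src/redhat build root are assumptions):

rpm -ivh php53-5.3.3-1.el5.src.rpm
cd /usr/src/redhat/SPECS
# re-enable the mcrypt subpackage in php53.spec, then:
rpmbuild -ba php53.spec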

Rainer
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] php53 and MSSQL

2011-04-15 Thread John Beranek
On 15/04/11 12:23, John Beranek wrote:
> [Reposted now I've joined the list, so I hopefully don't get moderated out]
> 
> Hi,
> 
> I've upgraded lots of machines to 5.6 (thanks!) and there was one
> particular machine that I'd also like to upgrade to PHP 5.3.
> Unfortunately it seems I can't.
> 
> On the machine I have php-mssql installed, and it appears that there is
> no php53-mssql.

I was going to see if I could rebuild the php53 SRPM with MSSQL
support, until I found that the SRPMs still aren't available on the
CentOS mirrors yet. Downloading the upstream RPM now, will see how that
goes...

John.

-- 
John Beranek To generalise is to be an idiot.
http://redux.org.uk/ -- William Blake



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] [OT] ups advice

2011-04-15 Thread Howard Fleming
Some of the newer HP servers are very picky about power from UPSs, from 
what I have read.

I have used several Best Ferrups UPSs over the years; other than one 
that toasted its transformer, I have never had any trouble out of them 
(just replace the battery every 3 to 4 years).

They are picky about their input power, though: do not connect them to 
an auto-regulating transformer (not the proper term?), or to the output 
of other UPSs -- it can cause interesting problems.

Howard

On 4/14/2011 15:33, Lamar Owen wrote:
> On Thursday, April 14, 2011 02:55:51 PM John R Pierce wrote:
>> http://powerquality.eaton.com/Products-services/Backup-Power-UPS/5125.aspx
>> or similar for this application.   I'd take one of those up versus the
>> same size APC SmartUps any day.
>
> We have a 5KVA Best Ferrups here that has never worked correctly :-)  But 
> I've seen my share of toasted APC's, too.
>
> Currently we run older APC SmartUPS (pure sine) for the workstation stuff and 
> Symmetras in the Data Centers.  Looking to put in a Toshiba or similar 500KVA 
> in the secondary Data Center later in the year.
>
>> BTW, another thing the 'good' UPS's do, more important than 'pure
>> sinusoidal output' for computer purposes*, is buck/boost voltage regulation.
>
> Yes.
>
>> * if you're running audio gear off a UPS, you definitely want the
>> sinusoidal output, but thats another market entirely.
>
> Or old 3Com Corebuilder/CellPlex 7000 gear, which shuts down with anything 
> but pure sinewave.
> ___
> CentOS mailing list
> CentOS@centos.org
> http://lists.centos.org/mailman/listinfo/centos
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] php53 and mcrypt

2011-04-15 Thread Geoff Galitz

More PHP fun!

I can see in the spec files that php-mcrypt support was removed by Redhat. 
I tried to find out why, but I don't have sufficient access to redhat 
bugzilla. I am wondering if it is actually necessary, as I have also run 
across a post or two indicating that applications relying on mcrypt still 
work with the new php53.

Perhaps mcrypt was superseded by another module or PHP core code?

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] php53 and eacclerator

2011-04-15 Thread Geoff Galitz
> I uploaded the spec here:
> http://ubliga.de/php-eaccelerator.spec
> 
> It's adjusted for RHEL/Centos 5.6 so that it works with stock php53 
> packages - no need to pull in packages from other repos.
> 


Thanks!

 
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] 40TB File System Recommendations

2011-04-15 Thread Benjamin Franz
On 04/14/2011 09:00 PM, Christopher Chan wrote:
>
> Wanna try that again with 64MB of cache only and tell us whether there
> is a difference in performance?
>
> There is a reason why 3ware 85xx cards were complete rubbish when used
> for raid5 and which led to the 95xx/96xx series.
> _

I don't happen to have any systems I can test with the 1.5TB drives 
without controller cache right now, but I have a system with some old 
500GB drives  (which are about half as fast as the 1.5TB drives in 
individual sustained I/O throughput) attached directly to onboard SATA 
ports in a 8 x RAID6 with *no* controller cache at all. The machine has 
16GB of RAM and bonnie++ therefore used 32GB of data for the test.

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
pbox3        32160M   389  98 76709  22 91071  26  2209  95 264892  26 590.5  11
Latency             24190us    1244ms    1580ms   60411us   69901us   42586us
Version  1.96       ------Sequential Create------ --------Random Create--------
pbox3               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 10910  31 +++++ +++ +++++ +++ 29293  80 +++++ +++ +++++ +++
Latency               775us     610us     979us     740us     370us     380us

Given that the underlying drives are effectively something like half as 
fast as the drives in the other test, the results are quite comparable.

Cache doesn't make a lot of difference when you quickly write a lot more 
data than the cache can hold. The limiting factor becomes the slowest 
component - usually the drives themselves. Cache isn't magic performance 
pixie dust. It helps in certain use cases and is nearly irrelevant in 
others.

-- 
Benjamin Franz
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


[CentOS] php53 and MSSQL

2011-04-15 Thread John Beranek
[Reposted now I've joined the list, so I hopefully don't get moderated out]

Hi,

I've upgraded lots of machines to 5.6 (thanks!) and there was one
particular machine that I'd also like to upgrade to PHP 5.3.
Unfortunately it seems I can't.

On the machine I have php-mssql installed, and it appears that there is
no php53-mssql.

php-mssql is built from the php-extras SRPM, so is there going to be a
php53-extras SRPM?

I've checked upstream, and they also don't have a php53-mssql package,
so if there _were_ to be solved it'd have to be in the 'Extras'
repository I guess...

Cheers,

John.

-- 
John Beranek To generalise is to be an idiot.
http://redux.org.uk/ -- William Blake

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOs 5.6 and Time Sync

2011-04-15 Thread Johnny Hughes
On 04/14/2011 06:23 AM, Mailing List wrote:
> On 4/14/2011 6:47 AM, Johnny Hughes wrote:
>>
>> Is it really true that the time is working perfectly with one of the
>> other kernels (the older ones)?
>>
>>
>>
> 
>Johnny,
> 
>   Yes, as long as I run the older 5.5 kernel my time is perfect. All
> clients can get time from this machine with no issues. As soon as I run
> the new kernel, or the Plus kernel for that matter, the time goes downhill
> ("uphill" actually).
>   
> To answer the previous question, I do have the HW clock set to UTC;
> everything is stock from the initial install of the package.
> 
> Brian.

I do not see anything from Dell that is a model C151.

I also do not see anything in the RH bugzilla that is problematic for
older AMD processors and the clock, unless running KVM type virtual
machines.

Is this a VM or regular install?

If this a real machine, do you have the latest BIOS from Dell?

Do you have any special kernel options in grub?
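
If it does turn out to be clock-source related, forcing one on the kernel
line in grub.conf is a cheap experiment (the kernel version and root=
below are placeholders):

kernel /vmlinuz-2.6.18-238.9.1.el5 ro root=LABEL=/ clocksource=hpet

acpi_pm and jiffies would be the other usual candidates to try.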



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Expanding RAID 10 array, WAS: 40TB File System Recommendations

2011-04-15 Thread Ross Walker
On Apr 15, 2011, at 4:48 AM, Rudi Ahlers  wrote:

> 
> 
> On Fri, Apr 15, 2011 at 10:40 AM, Ljubomir Ljubojevic  wrote:
> [nested OpenIndiana quote snipped -- the full exchange appears later in
> this digest]
> 
> Eham..., CentOS mailinglist maybe to continue in private?
> 
> Ljubomir
> 
> 
> Eham., many people are learning a lot more from this thread than from a
> lot of the other threads in the past few days. Let them continue, and
> don't subscribe to the thread :)

I agree with both assessments, but since this is a CentOS list and this 
thread has now twisted into ZFS advocacy, I must say as well: continue 
off-list.

-Ross

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Expanding RAID 10 array, WAS: 40TB File System Recommendations

2011-04-15 Thread Christopher Chan
On Friday, April 15, 2011 03:59 PM, John R Pierce wrote:
> On 04/14/11 5:43 PM, Christopher Chan wrote:
>> On Friday, April 15, 2011 02:46 AM, John R Pierce wrote:
>>> On 04/14/11 7:44 AM, Christopher Chan wrote:
 Now, if OpenIndiana resists using illumos...
>>> openindiana is under the Illumos project umbrella.  They aren't going to
>>> use anything else.
>> Eh? I was under the impression that they are separate and that Garrett
>> Damore was rather unhappy with the initial direction of OpenIndiana in
>> not preparing for an illumos release. 148 is still not illumos as far as
>> I know.
>
> afaik, both are still using pretty much the last opensolaris kernel with
> minor changes
>

or nice big changes from the standpoint of those who were pining for 
openindiana with b134+patches


>
> I was going on this, which says OpenIndiana is a member of the Illumos
> Foundation, that Illumos was providing the core/kernel, and OpenIndiana
> is integrating it into a complete system aka distribution
> http://wiki.openindiana.org/oi/Frequently+Asked+Questions#FrequentlyAskedQuestions-WhatistherelationshipbetweenOpenIndianaandIllumos%3F

oh i see.

>
> They go onto say they are waiting for Illumos to mature before they
> integrate it.

Yes... like getting g11n in. I guess traction is there already. 
OpenIndiana will be moving to illumos, so I guess it would be the one to 
use if one wants a Sun cc-compiled and Sun-linked distro.

It's going to be interesting to see how all these different projects 
including CentOS play out.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS on SSDs...

2011-04-15 Thread John Doe
Thanks to all for the info.

Guess I will either keep CentOS 5 and compile my own kernel for the 
discard option, or wait for CentOS 6...

Thx,
JD
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Expanding RAID 10 array, WAS: 40TB File System Recommendations

2011-04-15 Thread Rudi Ahlers
On Fri, Apr 15, 2011 at 10:40 AM, Ljubomir Ljubojevic wrote:

> John R Pierce wrote:
> > On 04/14/11 5:43 PM, Christopher Chan wrote:
> >> On Friday, April 15, 2011 02:46 AM, John R Pierce wrote:
> >>> On 04/14/11 7:44 AM, Christopher Chan wrote:
>  Now, if OpenIndiana resists using illumos...
> >>> openindiana is under the Illumos project umbrella.  They aren't going
> to
> >>> use anything else.
> >> Eh? I was under the impression that they are separate and that Garrett
> >> Damore was rather unhappy with the initial direction of OpenIndiana in
> >> not preparing for an illumos release. 148 is still not illumos as far as
> >> I know.
> >
> > afaik, both are still using pretty much the last opensolaris kernel with
> > minor changes
> >
> >
> > I was going on this, which says OpenIndiana is a member of the Illumos
> > Foundation, that Illumos was providing the core/kernel, and OpenIndiana
> > is integrating it into a complete system aka distribution
> >
> http://wiki.openindiana.org/oi/Frequently+Asked+Questions#FrequentlyAskedQuestions-WhatistherelationshipbetweenOpenIndianaandIllumos%3F
> >
> > They go onto say they are waiting for Illumos to mature before they
> > integrate it.
>
> Eham..., CentOS mailinglist maybe to continue in private?
>
> Ljubomir
>


Eham., many people are learning a lot more from this thread than from a
lot of the other threads in the past few days. Let them continue, and
don't subscribe to the thread :)


-- 
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] speed-tuning samba?

2011-04-15 Thread Andrzej Szymański
On 2011-04-14 20:16, Les Mikesell wrote:
> One thing in particular that I'd like to make faster is access to a set
> of libraries (boost, etc.) that are in a directory mapped by several
> windows boxes (mostly VM's on different machines)used as build servers.
>

I usually run samba with defaults, as playing with the settings did not 
change much in my case. However, I found in one of the IBM redbooks 
(http://www.redbooks.ibm.com/redpapers/pdfs/redp4285.pdf on page 131) 
that disabling tcp sack and dsack is recommended on a samba box working 
on a gigabit LAN (when samba host and clients are in the same LAN).

In one case it helped a lot, in the other it did not change anything, so 
you should try it on your own:
sysctl -w net.ipv4.tcp_sack=0
sysctl -w net.ipv4.tcp_dsack=0
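
To keep the change across reboots you can also put the same settings in
/etc/sysctl.conf and reload with "sysctl -p":

# /etc/sysctl.conf -- persist the SACK/DSACK tweak
net.ipv4.tcp_sack = 0
net.ipv4.tcp_dsack = 0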

Andrzej
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Expanding RAID 10 array, WAS: 40TB File System Recommendations

2011-04-15 Thread Ljubomir Ljubojevic
John R Pierce wrote:
> On 04/14/11 5:43 PM, Christopher Chan wrote:
>> On Friday, April 15, 2011 02:46 AM, John R Pierce wrote:
>>> On 04/14/11 7:44 AM, Christopher Chan wrote:
 Now, if OpenIndiana resists using illumos...
>>> openindiana is under the Illumos project umbrella.  They aren't going to
>>> use anything else.
>> Eh? I was under the impression that they are separate and that Garrett
>> Damore was rather unhappy with the initial direction of OpenIndiana in
>> not preparing for an illumos release. 148 is still not illumos as far as
>> I know.
> 
> afaik, both are still using pretty much the last opensolaris kernel with 
> minor changes
> 
> 
> I was going on this, which says OpenIndiana is a member of the Illumos 
> Foundation, that Illumos was providing the core/kernel, and OpenIndiana 
> is integrating it into a complete system aka distribution
> http://wiki.openindiana.org/oi/Frequently+Asked+Questions#FrequentlyAskedQuestions-WhatistherelationshipbetweenOpenIndianaandIllumos%3F
> 
> They go onto say they are waiting for Illumos to mature before they 
> integrate it.

Eham..., CentOS mailinglist maybe to continue in private?

Ljubomir
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] cents 5.6 ..... futur

2011-04-15 Thread Ljubomir Ljubojevic
Michel Donais wrote:
> Passing from CentOS 5.5 to 5.6 was easy as an upgrade.
>  
> Will it be the same from 5.6 to 6.0, or will a full install be better?
>  
> ---
There is such a big difference between them (base packages, package and 
system design, dependencies) that a full install will be necessary, not 
only recommended. I think an upgrade might even be impossible.

Ljubomir
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] [OT] ups advice

2011-04-15 Thread admin lewis
2011/4/14 John R Pierce :
> On 04/14/11 9:06 AM, admin lewis wrote:
>> Hi
>> I have a Dell PowerEdge T310 *tower* server.. I have to buy an ups by
>> apc... could anyone help me by giving a hint ?
>> a simple "smart ups 1000" could be enough ?
>>
>>
>
> apc smartups or eaton powerware would be my choices.    1000VA should be
> fine.
>
> avoid consumer UPS's like apc backups, they are junk.
>
>
> how long do you need the system to stay powered when the power fails?
> just long enough to shutdown?  or do you need it to stay up for some
> period of time?
>
>

A few minutes... 10 minutes should be enough, and then shut down the machine.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Expanding RAID 10 array, WAS: 40TB File System Recommendations

2011-04-15 Thread John R Pierce
On 04/14/11 5:43 PM, Christopher Chan wrote:
> On Friday, April 15, 2011 02:46 AM, John R Pierce wrote:
>> On 04/14/11 7:44 AM, Christopher Chan wrote:
>>> Now, if OpenIndiana resists using illumos...
>> openindiana is under the Illumos project umbrella.  They aren't going to
>> use anything else.
> Eh? I was under the impression that they are separate and that Garrett
> Damore was rather unhappy with the initial direction of OpenIndiana in
> not preparing for an illumos release. 148 is still not illumos as far as
> I know.

afaik, both are still using pretty much the last opensolaris kernel with 
minor changes


I was going on this, which says OpenIndiana is a member of the Illumos 
Foundation, that Illumos was providing the core/kernel, and OpenIndiana 
is integrating it into a complete system aka distribution
http://wiki.openindiana.org/oi/Frequently+Asked+Questions#FrequentlyAskedQuestions-WhatistherelationshipbetweenOpenIndianaandIllumos%3F

They go onto say they are waiting for Illumos to mature before they 
integrate it.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos