[CentOS-virt] updating for XSA-108 -

2014-09-24 Thread Luke S. Crawford
So... it is theorized that XSA-108 is why Amazon is rebooting.   Is 
there any way for me to know when this update hits CentOS 5 or 
xen4centos6?  Do we know that it is *not* included in one of the 
CentOS patches?


___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS] Universal server hardware platform - which to choose?

2012-06-28 Thread Luke S. Crawford
On Tue, Jun 26, 2012 at 03:03:23PM -0400, Steve Thompson wrote:
 On Tue, 26 Jun 2012, m.r...@5-cent.us wrote:
 
  We've had a number of servers fail, and it *seems* to be related to the
  motherboard.
 
 I too have had bad experiences with SuperMicro motherboards; never had one
 last more than three years.

The problem with Supermicro is that the end user assembles them; 
if you use ESD protection, this is fine.   If you don't?  Go buy a Dell
or something.

The big problem is that many of the smaller assembly houses also
don't believe ESD is a big deal.  If there is carpet on the workshop
floor?  run.  If you see techs working without a wrist strap? walk.  

I've assembled hundreds of Supermicro servers with and without ESD
protection, and the behavior is fairly reproducible.   Yeah, the
problems don't always show up right away?  but they come. 

I remember when I first figured this out;  we had been having about
1 in 3 of our supermicro servers not pass burn-in.   Then, in production,
we'd lose things like RAID cards and ethernet ports all the time. I'd 
spend days swapping out parts and RMAing stuff, just to get one server
built.   I mean, I didn't really believe that the factory was sending 
me broken shit, and there was noticeable static in the office.  (I 
always 'took the power supply pledge' before touching anything)
Anyhow, I read a study by Adaptec (we were using Adaptec hardware 
raid in everything, and they were failing like crazy)   saying that 
nearly all customer RMAs, upon inspection, were due to ESD damage.   

Well, the boss ended up ordering something like 70 servers (rather than 
the three every two weeks he was ordering before)  -  I talked him into
letting me blow $200 on ESD protection, just to see if that was 
the problem, and instead of having 1 out of 3 die as before?  all of them
passed burn-in on the first try.   

Properly assembled Supermicro kit (both AMD and Intel) is just
as good as the Dell stuff.  I have one server that's been chugging away
for something like ten years now.  (I need to get rid of it;   Dual
socket 604 xeons.  It's a space heater, and it doesn't get me much by way
of compute power.  I've got all customers off of it, but my own personal
vps?  I haven't had time.)  

But yeah, you've gotta get someone to assemble it that gives a shit.  
I mean, me?  I know that it's my pager that is going off at 4am if
something breaks.  It's me that's going to have to fumble around with
spares.  I give a shit.   

As it is, I'd rather assemble my own servers than trust my stuff to
someone for whom down hardware is not that big of a deal.

Assembling a superserver, if you don't fuck it up, takes about five 
minutes.   Burn in is trivial when they pass... and when they don't 
pass, which is extremely rare, I know I screwed something up. 


On the other hand... I have a very low opinion of Dell support
(granted, I'm pretty hard to please in that department), but
from what I've seen?  all the big names ship okay stuff from the factory.  
They have proper ESD precautions in the factory.  So yeah; if you
aren't willing to go with the table mat, the wrist strap, 
and the monitor, well, order the server from Dell and don't open it. 
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Universal server hardware platform - which to choose?

2012-06-28 Thread Luke S. Crawford
On Thu, Jun 28, 2012 at 09:57:33PM -0700, John R Pierce wrote:
 On 06/28/12 8:56 PM, Luke S. Crawford wrote:
  The problem with Supermicro is that the end user assembles them;
  if you use ESD protection, this is fine.   If you don't?  Go buy a Dell
  or something.
 
 
 well, the SM kit I've bought was built and integrated by a major name 
 systems integrator.   they were sold as complete solutions under this 
 vendors' label, and supported by said vendor.
 
 really, I'd say it's all in the VAR and your service contract with them. 
 very few VARs do the level of systems testing that HP or IBM or Dell or 
 whatever do...  If you really really want to be your own systems 
 integrator, then do extensive burnin on new systems, and stock spare parts.

I agree.  Except that you don't need to do all, or even most of the
work that a systems integrator does. For me, the hard part of being a 
systems integrator is the sales and negotiation bullshit.  That's why
I don't build systems for other people.   On top of that, you have 
to deal with your customers opening them up, without ESD protection, 
and adding garbage, or customers blaming OS bugs on you.  If you only 
build for yourself, you don't have to worry about that sort of thing.

I mean, you still have to figure out if it's hardware or the OS, but
at least you get to choose the OS.

But yes.  stock spares.   I try to make sure I always have one server
(minus disks)  ready to go;  If I get a hardware problem (I can 
usually tell remotely)  I put it in the van before I head down
to the data center;  If I can't figure things out quickly on-site,
I take the hard drives out of the bad hardware, put them in the 
spare box, boot, and go.  (Of course, I also have spares of other parts;
but if something in production is down, you don't want to sit there
farting around trying to figure out which DIMM is bad while 
the pager is exploding.  Swap the whole thing and screw with it
back at the shop after you have cleaned up the support queue.)   

(if you use hardware raid, this becomes... more complicated.  
Test your procedure first.)  

From what I've seen?  the difference between no negotiation and 
the best possible negotiation, when you buy whole servers?  is often
50% of the total price.  Sometimes more.  When buying parts? it's 5%, 
if that.   (we're talking in the 1-5 server quantity here. I'm sure
things change if you are buying hundreds or thousands at once and you
are savvy.  I've never seen a savvy entity negotiate for hundreds
or thousands of servers or parts for same.)  

That, and to negotiate well, you need to have all of the knowledge you'd
need to buy the parts to build your own server.   Either way,
unless you are prepared to just pay full price, you need to keep up
with hardware and the relative costs. 

Heck, I'll do all the assembly and burn in work, and keep spares 
around, just to avoid the negotiation bullshit.  For me? it's far easier.
And if you ask me?  dealing with broken hardware is downright relaxing
compared with trying to convince some goddamn monkey that the 
reboot that happened last night was really a hardware issue, and yes, it 
came back up, but it still needs to get fixed.   But it works now, right?
(sorry... I just remember some extremely frustrating experiences dealing
with Dell's version of Mordak.  And I was getting paid by the hour, so if
corporations had feelings, the company hiring me would really have felt 
worse.) 

But that has as much to do with who I am and what skills I have as
anything else.   If I were an extrovert, I'd probably find 'educating'
tech support to be less of a hellish experience. 

And, of course, on all but the super expensive plans, if it's not
acceptable to be down all weekend for a hardware failure on friday
night, well, you still need those spares.   

(Of course, if I only had one or two servers, it'd probably make sense
to just pay twice the price and be done with it.  But nearly all of my
net worth is tied up in server hardware, so I can't walk away from that
50%.) 


But yeah, my point is just that if you build the hardware yourself, you
only have to do a small subset of the 'systems integrator' work. 
Yeah, it's a lot more technical work than just firing the money
cannon at Dell or HP, but it's a lot less social work than trying
to get a reasonable deal, or trying to get reasonable service
out of Dell or HP.   
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] RAID?

2012-06-25 Thread Luke S. Crawford
On Tue, Jun 26, 2012 at 03:10:30AM +0800, Emmanuel Noobadmin wrote:
 On 6/25/12, Warren Young war...@etr-usa.com wrote:
  Then there's the LVM option, but I can't immediately come up with a
  one-liner that tells you whether a given LVM disk set is equivalent to
  software RAID.
 
 LVM has a mirroring option but from, possibly outdated, reading a
 couple of years back, it is not as smart as md raid when it comes to
 using both disks to speed up reading and will only read from one disk.


Also, nobody uses it.  If you hit a problem, you are completely on your
own.   I thought it would be simpler to have only one abstraction
layer;  nope.  Use md and LVM on top of the md. 
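
A minimal sketch of that layout (device and volume names here are
placeholders):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  pvcreate /dev/md0           # LVM goes on top of the md device
  vgcreate vg0 /dev/md0
  lvcreate -L 20G -n data vg0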
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] XEN or KVM - performance/stability/security?

2012-05-16 Thread Luke S. Crawford
On Fri, May 11, 2012 at 03:46:43PM -0700, Gordon Messmer wrote:
 A late reply, but hopefully a useful set of feedback for the archives:
 
 On 04/20/2012 05:59 AM, Rafał Radecki wrote:
  Key factors from my opint of view are:
  - stability (which one runs more smoothly on CentOS?)
 
 I found that xenconsoled could frequently crash in Xen dom0, and that 
 guests would be unable to reboot until it was fixed.  I also found that 
 paravirt CentOS domUs would not boot if they were updated before the 
 dom0.  In short, Xen paravirt was very fragile and troublesome.  I never 
 tested Xen with hardware virtualization.

This particular problem was fixed some time ago; it hasn't happened
to my (many) dom0s in more than a year.

The RHEL5 Xen dom0 was garbage until 5.3 or so, to the point where I'd
compile my own and deal with the pain of using a non-RHEL kernel with
a RHEL userland.

Stability has improved vastly.

  - performance (XEN PV/HVM(with or without pv drivers) vs KVM HVM(with or
  without pv drivers))
 
 PV drivers will make some difference, but the biggest performance 
 difference you'll see is probably the difference between file-backed VMs 
 and LVM-backed VMs.  File-backed VMs are extremely slow.  Whichever 
 system you choose, use LVMs as the backing for your guests.

My experience has been that using qemu for disk has something of a 
multiplier effect;  e.g. it makes slow spinning disk noticeably 
slower.  The paravirtualized drivers help immensely in that regard.

(how are the paravirt drivers in KVM these days?  I have a server 
full of kvm guests running some ancient version of ubuntu I will be
moving to RHEL6 shortly.)  
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] RAID-10 vs Nested (RAID-0 on 2x RAID-1s)

2012-03-29 Thread Luke S. Crawford
On Thu, Mar 29, 2012 at 04:49:26PM -0500, Tim Nelson wrote:
 Am I overthinking this? Does the kernel handle the mirror/stripe 
 configuration under the hood, simply presenting me with a magical RAID10 
 array? Or, is this something different and I really should be performing the 
 RAID creation manually as noted in option #1?

I used to do something very similar to option 1, save that I used LVM to 
do the striping.  I now use the md raid10 array.  Rebuilds are dramatically
faster under the 'just let md handle the raid10' option.   
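
Creating one is a one-liner (devices are placeholders):

  mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2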
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] md raid 10

2012-03-08 Thread Luke S. Crawford
On Thu, Mar 08, 2012 at 02:51:58PM -0500, m.r...@5-cent.us wrote:
 John R Pierce wrote:
  On 03/08/12 6:33 AM, m.r...@5-cent.us wrote:
ok, so 3 x 48/64 core servers uses the same power as 6 x 4/8 core ?
thats still major win.
  Um, no - that's what I'm saying is*not*  the case. The new suckers drink
  power - using a UPS that I could hang, say, 6 Dell 1950's off of,*if*
  I'm lucky, I can put three of the new servers. And at that, if a big jobs
  running (they very much vary in how much power they draw, depending on
  usage), even with only three on, I've seen the leds run up to where
  they're blinking, indicating it's near overload, over 90% capability.
 
  ok, how do you figure 3 48 core modern servers are not more powerful
  computationally than 6 8 core servers?   the 1950's were cloverton
  which were dual core2duo chips, 2 sockets, at ~ 2-3GHz, for your 8 cores
  per 1U.
 
 I'm sorry, but to me, the above is a non sequitur. I was talking about how
 much power the servers drink, and that the UPSs that I have can barely,
 barely handle half as many or less, and I'm running out of UPSs, and out
 of power outlets for them in such a small space (that is, a dozen or so in
 each rack), without trying to go halfway across the room.

If you need lots of smaller servers, supermicro makes a very nice single 
socket amd G34 board:
http://www.supermicro.com/Aplus/motherboard/Opteron6000/SR56x0/H8SGL-F.cfm

I have a bunch of those in production, and they work well. Most of mine only
have 32GiB ram;  I bought them back when 8GiB modules were expensive;  but 
if I bought one today, they'd have 64GiB, as 8GiB reg. ecc ddr3 is cheap
now.  One of these uses more than half the power that a dual G34 board with
double the ram/cpu would use, but not a lot more than half.  

One of those single-socket G34 boards should use rather less power
than a dual-socket 1950 with FBDIMMs and it should give you rather more
compute power and ram. (ugh.  as someone that uses a lot of ram and pays a 
lot for power, I hate FBDIMMs.  I was almost entirely AMD socket F during 
that time period for the reg.ecc ddr2.  all my new stuff is intel 56xx 
with reg. ecc ddr3.) 

Also, they make lower power G34 CPUs... they cost a bit more, but when 
you are paying California prices for power, it's usually worth it, especially
if you plan on keeping the thing for 5 years rather than just 3. 

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS 6.2 software raid 10 with LVM - Need help with degraded drive and only one MBR

2012-03-05 Thread Luke S. Crawford
On Mon, Mar 05, 2012 at 06:12:52PM -0500, Ross Walker wrote:
 Technically if the data portion is a true RAID10 you would only need to 
 mirror /boot to sdb, cause if both sda AND sdb are out then the whole RAID10 
 is SOL and there would be no need to boot off of sdc or sdd.

 Having said that though it's just easier to create a 4 disk raid1 of /boot 
 and duplicate the MBR across all of them.

I'm using the linux 'raid10' md type.  It actually allows an odd number of 
disks;  it just guarantees that each chunk of data is on at least two drives.

So yeah, rather than trying to guess on which chunk is where, I think a 
mirrored /boot is the easy way out.  

I did used to create mirror sets and stripe across them using LVM striping.
(Before that I actually used LVM mirroring, but it seems that nobody uses 
LVM mirroring.)

Anyhow, my anecdotal experience is that the linux md 'raid10' option
results in an array that rebuilds like twice as fast as two mirrors
that you stripe across with LVM.  

But yeah, either way, md0 was a small mirror across all drives.   
(and remember to load the bootloader on all drives.  It's in my kickstart
so I can't forget on setup, but I still need to be careful when I replace
drives.)
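
For a replacement drive, with legacy grub that's something along these
lines (adjust the device; this assumes /boot is the first partition):

  grub --batch <<EOF
  device (hd0) /dev/sdb
  root (hd0,0)
  setup (hd0)
  EOF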

 Sounds like the hosting provider isn't very Linux savvy. I would always 
 double check the setup of any system someone else installs for you.

As a general rule, if you want your hosting provider to support more
than just the hardware, you have to setup the software their way.  I mean, 
I'm guessing that you asked this hosting provider 'Hey, can you setup 
software raid for me?' and they did, even though they don't usually.  
I mean, if they setup software raid usually and still haven't solved this, 
then they are just plain incompetent;  I'm just saying, if this is an 
unusual setup for them, it might be the 'I haven't done this before' kind 
of incompetence, which we all suffer from time to time.  
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS 6.2 software raid 10 with LVM - Need help with degraded drive and only one MBR

2012-03-04 Thread Luke S. Crawford
 Right.  I was referring to RAID 1.  For a RAID 10, you would have to
 find the proper drive to boot from.  This is why I tend to limit myself
 to RAID 1 in software.  If I need something more complex than that, I
 get a hardware card so the OS just sees it as a single drive and you
 don't have to worry about grub.

When I want to boot off of a raid 10, I first partition the drives
and make a small (like a gigabyte) partition 1, and put the rest of 
the space on partition 2.  I do this on all drives, then I create
a raid 1 of sd[abcd]1 for /, and a raid10 of sd[abcd]2 for everything 
else.   I've got this config on several tens of servers and it seems
to work okay.  
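
Roughly, after partitioning, something like (a sketch; adjust devices
to taste):

  # small 4-way mirror for /, so the box can boot off any drive
  mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1
  # raid10 across the big second partitions for everything else
  mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2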

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Software RAID1 with CentOS-6.2

2012-03-01 Thread Luke S. Crawford
 A friend of mine has had a couple of strange problems with the RE (RAID) 
 series of Caviars, which utilize the same mechanics as the non-RE 
 Blacks.  For software RAID, I would recommend that you stick with the 
 non-RE versions because of differences in the firmware.

I would recommend the opposite.  If you use a RAID at all, the 'raid 
edition' or 'enterprise' or whatever drives give you significantly
better uptime.  (Now, is that worth the additional cost? that depends
on your application.   They are rather more expensive, and for some
applications, a few hours of downtime every few years isn't a big deal,
in which case, by all means use the WD Black drives; you save a bunch 
of money, and they are about as fast.  I use consumer drives in my own
desktops, because nobody but me cares if the thing is working, and I only
care when I'm physically nearby and can fix the damn thing if it's broken.)

I have tried /very/ hard to get some sort of consumer-grade drive working
in a raid, hardware or software.  I've tried on the order of 50 drives, 
many of them WD black, and many of those with the WDTLER.exe hack set.  
I still have a few of these in production, but whenever I get half a 
chance, I swap out the consumer-grade drives for enterprise or raid-edition 
drives, even at today's ridiculous drive prices.   I RMA the bad
consumer drives and have been giving them away to friends and people
that have done me favors.  (If anyone wants to arrange a 'many consumer 
drives for few enterprise drives' swap, let me know.) 

The WD blacks mostly work, and they perform well.  The problem is that 
they don't fail reliably.  

The other problem I notice is that it's very rare to see a system with 
a raid edition drive that is significantly slower than others of the 
same model in the same system.  It's fairly common to see this with 
consumer drives.

I mean, everything should be RAID.  soft/hard/whatever- spinning
rust without redundancy is just a bad idea, unless you are using it as
'really slow ram' kind of scratch space, and even then, redundancy is
a good idea unless you have auto provisioning down so well that rebuilding
a box is no additional work.  My auto provisioning is not that good yet,
so I RAID even data I don't care about, because I don't want to have
to bother rebuilding the system when the drive fails.  So when a drive 
has problems, I want it to fail and let the RAID system handle it.  

The problem with consumer grade drives (and I've seen this with WD Black
consumer drives, both with WDTLER.EXE and without) is that often, before
they fail?  they get really, really slow, sometimes to the point of 
completely or nearly completely hanging the raid.

To be clear, similar things occasionally happen with 'enterprise' or 'raid 
edition' drives too, but it's very rare and usually not as bad.  I can 
count the number of times this has happened to me ever without taking off
my shoes.  Just yesterday I had a WD RE4 500GB drive (a fairly nice 
'enterprise' drive-  most of my current drives are similar)  fail in 
such a way that my 4-disk raid10 dropped from its normal 100+ MB/sec 
sequential throughput down below 30 MB/sec, and latency went through the 
roof.   The box was nigh unusable until I failed that drive out.   But, 
it's very rare that with 'enterprise' drives the whole thing completely 
freezes up, and this way, at least I could log in, figure out what was
up and fail the bad drive.   

With consumer grade drives (Including the wd blacks with WDTLER)  it's
pretty common for the whole raid to completely freeze.

It isn't a problem with software raid alone;  I've seen it with 3ware and
LSI RAID cards, too.  (In one test with a known bad but not reporting it 
drive, the 3ware didn't freeze up quite as hard.  It kept 'retrying' the 
bad disk, then you had a second or two of access to the RAID, then it went
back to 'retrying' and the RAID was frozen for a few seconds, etc...)

I mean, my total fleet is under a thousand drives, and I don't have
a good inventory system tracking errors.  I only have a few consumer-grade
drives left in production, but it's still fairly common for a bad consumer
drive to hang up an old server and set off my pager in the middle of the 
night, 'cause I/O has hung on one of the servers.   When a raid 
edition/enterprise drive fails, I get an email, but I can deal with
that in the morning.  The RAID continues chugging along as long as
I have enough good drives left.


-- 
Luke S. Crawford
http://prgmr.com/xen/ -   Hosting for the technically adept
http://nostarch.com/xen.htm   -   We don't assume you are stupid.  
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Software RAID1 with CentOS-6.2

2012-02-28 Thread Luke S. Crawford
On Wed, Feb 29, 2012 at 11:27:53AM +1100, Kahlil Hodgson wrote:
 Now I start to get I/O errors on printed on the console.  Run 'mdadm -D
 /dev/md1' and see the array is degraded and /dev/sdb2 has been marked as
 faulty.

what I/O errors?


 So I start again and repeat the install process very carefully.  This time I
 check the raid array straight after boot.
 
 mdadm -D /dev/md0   -   all is fine.
 mdadm -D /dev/md1   -   the two drives are resyncing.
 
 Okay, that is odd. The RAID1 array was created at the start of the install
 process, before any software was installed. Surely it should be in sync
 already?  Googled a bit and found a post where someone else had seen the same
 thing happen.  The advice was to just wait until the drives sync so the 'blocks
 match exactly' but I'm not really happy with the explanation.  At this rate
 it's going to take a whole day to do a single minimal install and I'm sure I
 would have heard others complaining about the process.

Yeah, it's normal for a raid1 to 'sync' when you first create it.
The odd part is the I/O errors. 

 Any ideas what is going on here? If its bad drives, I really need some
 confirmation independent of the software raid failing. I thought SMART or
 badblocks give me that. Perhaps it has nothing to do with the drives.  Could a
 problem with the mainboard or the memory cause this issue?  Is it a SATA3
 issue?  Should I try it on the 3Gb/s channels since there's probably little
 speed difference with non-SSDs? 

Look up the drive errors.   

Oh, and my experience?  both WD and Seagate won't complain if you
err on the side of 'when in doubt, return the drive'  - that's what I
do.   

But yeah, usually SMART will report something... at least a high reallocated
sector count or something.
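
Something like this is a quick first look (smartmontools; attribute
names vary a bit by vendor):

  smartctl -a /dev/sdb | grep -i -e reallocated -e pending -e uncorrect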


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Data consumption (external connections only)

2012-02-26 Thread Luke S. Crawford
On Sun, Feb 26, 2012 at 02:21:09PM -0600, Frank Cox wrote:
 It looks like it does pretty much the same thing as several other monitoring
 tools that I've looked at.  However, none of them separate local traffic from
 external traffic.

check out http://bandwidthd.sourceforge.net/  -  It only supports IPv4,
but it's pretty convenient, as you can define what 'local' and 'external'
means by IP address.  
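
From memory, the config is just a few lines in bandwidthd.conf; anything
matching a 'subnet' line gets counted, so you carve out 'local' that way
(addresses and path are placeholders):

  # /etc/bandwidthd/bandwidthd.conf
  dev "eth0"
  subnet 10.0.0.0/8
  subnet 192.168.1.0/24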
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Data consumption (external connections only)

2012-02-26 Thread Luke S. Crawford
On Sun, Feb 26, 2012 at 11:30:14PM -0600, Frank Cox wrote:
 On Mon, 27 Feb 2012 00:26:59 -0500
 Luke S. Crawford wrote:
 
  check out http://bandwidthd.sourceforge.net/  -  It only supports IPv4,
  but it's pretty convenient, as you can define what 'local' and 'external'
  means by IP address.  
 
 Cool!  That looks like it could be the real McCoy.
 
 Now I'm off to play with this toy

It's pretty cool;  I used it for billing for a while.  The big problem is
that it doesn't support IPv6, and that's pretty much essential these days,
at least to your more technically savvy customers.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] System reboots automatically more or less every two days

2012-02-23 Thread Luke S. Crawford
On Thu, Feb 23, 2012 at 03:35:43PM +0100, fabio.pugna...@tiscali.it wrote:
 now every two days the system automatically reboots as you can see

You want to set up a serial console, and log it.  Usually when the system 
reboots or crashes, it will print something to console indicating what 
is happening.  It can be a great help with hardware problems.
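
A sketch of the usual CentOS 5 grub bits, assuming the first serial port
at 115200 baud:

  # /boot/grub/grub.conf
  serial --unit=0 --speed=115200
  terminal --timeout=5 serial console
  # then append to the kernel line:
  #   console=tty0 console=ttyS0,115200n8

Log the port from another machine or a console server and you'll catch
whatever it prints on the way down.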

-- 
Luke S. Crawford
http://prgmr.com/xen/ -   Hosting for the technically adept
http://nostarch.com/xen.htm   -   We don't assume you are stupid.  
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS-virt] RHEL 5.5 Xen fixes

2010-03-31 Thread Luke S Crawford
compdoc comp...@hotrodpc.com writes:

 When Red Hat picks a release of kernel, xen, kvm, etc., they
 tweak, change, and test it until its 'enterprise' ready. If
 they say you can run a business class server with it, I
 believe them. Based solely on how well centos works.
 
 Even though RHEL 5.4 doesn't have the newest of anything, it
 does what it does well. Even if I have to put up with fewer
 features or fixes than the current version, I'm leaving it
 alone. 
 
 I'm sure there are guys that can install newer releases of
 everything needed to run the latest xen, and it might be
 stable, it might not. 

In my experience (several hundred RHEL xen boxes, and maybe 40
xen.org xen boxes over five years), until recently the xen.org 
xen kernel was quite a bit more stable than the RHEL xen kernel.   
(I mean, the RHEL kernel didn't crash, but you'd have weird hangups
with things like xenconsoled dying, or a guest's network suddenly no
longer passing packets.)

The RHEL xen kernel has been getting markedly better over time, 
though, and the RHEL xen kernel has /much/ better driver support
than the xen.org 2.6.18 kernel.  My next server at prgmr.com
will likely have either the CentOS xen kernel or the opensolaris
xvm xen kernel rather than the xen.org xen kernel for that reason 
(assuming I can get PVGRUB working and that paravirt ops linux guests 
work, which I believe they do.)  

-- 
Luke S. Crawford
http://prgmr.com/xen/ -   Hosting for the technically adept
http://nostarch.com/xen.htm   -   We don't assume you are stupid.  
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Slightly OT: FakeRaid or Software Raid

2009-12-03 Thread Luke S Crawford
Grant McWilliams grantmasterfl...@gmail.com writes:
 So if I have 6 drives on my RAID controller which do I choose?


considering the port-cost of good raid cards, you could probably use md
and get 8 or 10 drives for the same money.   It's hard to beat more 
spindles for random access performance over a large dataset.  (of course, 
the power cost of another 2-4 drives is probably greater than that of a 
raid card)  
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Slightly OT: FakeRaid or Software Raid

2009-12-02 Thread Luke S Crawford
Ben M. cen...@rivint.com writes:

 Thanks. The portability bonus is a big one. Just two other questions I 
 think.
 
 - Raid1 entirely in dom0?

That's what I do.  I make one big md0 in the dom0, then partition that out 
with LVM.
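
Concretely, that's something like (volume group and LV names are
placeholders):

  pvcreate /dev/md0
  vgcreate guests /dev/md0
  lvcreate -L 10G -n domu1-disk guests   # one LV per guest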

 - Will RE type HDs be bad or good in this circumstance? I buy RE types 
 but have recently become aware of the possibility where TLER 
 (Time-Limited Error Recovery) can be an issue when run outside of a 
 Raid, e.g. alone on desktop machine.

Generally speaking, the RE or enterprise drives are all I use
in production (all of production is software raid'd).  Otherwise,
even though you have a mirror, if one drive fails in a certain
way (happened to me twice before I switched to 'enterprise' drives)
the entire box hangs waiting on that one bad drive.  

___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Slightly OT: FakeRaid or Software Raid

2009-12-02 Thread Luke S Crawford
Grant McWilliams grantmasterfl...@gmail.com writes:

 I don't use software RAID in any sort of production environment unless it's
 RAID 0 and I don't care about the data at all. I've also tested the speed
 between Hardware and Software RAID 5 and no matter how many CPUs you throw
 at it the hardware will win.  Even in the case when a 3ware RAID controller
 only has one drive plugged in it will beat a single drive plugged into the
 motherboard if applications are requesting dissimilar data. One stream from
 an MD0 RAID 0 will be as fast as one stream from a Hardware RAID 0. Multiple
 streams of dissimilar data will be much faster on the Hardware RAID
 controller due to controller caching.


Personally, I never touch raid5, but then, I'm on SATA.   I do agree
that there are benefits to hardware raid with battery backed cache if
you do use raid5 (but I think raid5 is usually a mistake, unless it's
all read only, in which case you are better off using main memory for 
cache.  you are trading away small write performance to get space;  with 
disk, space is cheap and performance is expensive, so personally, if 
I'm going to trade I will trade in the other direction.)

However, with mirroring and striped mirrors (I mirror everything;  
even if I don't care about the data, mirrors save me time that is
worth more than the disk.)  my bonnie tests showed that md was faster
than a $400 pcie 3ware.  

As far as I can tell, there's not much advantage to hardware raid
on SATA;  if you want to spend more money, get SAS.  The spinning disks
are going to be the slowest thing in your storage system by far.  

battery backed cache is cool for writes,  but most raid controllers have 
such puny caches, it doesn't really help much at all except in the
case of small writes to raid5.
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Slightly OT: FakeRaid or Software Raid

2009-12-02 Thread Luke S Crawford
Grant McWilliams grantmasterfl...@gmail.com writes:

 Interesting thoughts on raid5 although I doubt many would agree. I don't see
 how the drive
 type has ANYTHING to do with the RAID level. 

raid5 tends to suck on small random writes;   SATA sucks on small
random anything, so your worst-case (and with my use case, and most
'virtualization' use cases, you spend almost all your time in the 
worst-case)  is much worse than raid5 on SAS, which has reasonable 
random performance. 

The other reason why I think SATA vs SAS matters when thinking about 
your raid card is that the port-cost on a good raid card is so 
high that you could almost double your spindles at sata prices,
and without battery backed cache, the suckage of RAID5 is magnified,
so soft-raid5 is generally a bad idea. 

 There are different RAID levels for different situations, I guess, but a
 RAID 10 (or 0+1) will never reach the write or read performance of a RAID-5:
 http://www.tomshardware.com/reviews/external-raid-storage,1922-9.html

I understand raid5 is great for sequential, but I think
my initial OS install is the last sequential write any of my servers
ever sees.  My use-case, you see, is putting 32GiB worth of VPSs on one mirror
(or a stripe of two mirrors).  It is all very random, just 'cause you have
30+ VMs on a box.  

If you do lots of sequential stuff, you will see very different
results, but the virtualization use case is generally pretty random, because
multiple VMs, even if they are each writing or reading sequentially, 
make for random disk access. 

(now, why do my customers tolerate this slow sata disk?  because it's 
cheap.  Also, I understand my competitors use similar configurations.)  

___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Xen vs. iSCSI

2009-06-16 Thread Luke S Crawford
Bill McGonigle b...@bfccomputing.com writes:

 On 06/15/2009 11:33 PM, Luke S Crawford wrote:
  xm sched-credit -d 0 6
 
 Ah, ha!  This appears to work.  I didn't need to reserve a CPU for the 
 dom0 (knock on wood).  Much obliged, Luke.
 
 I'm academically curious, though - I seem to have created a CPU deadlock 
 of some sort, yet in 'xm top' none of the CPU's were pegged.  I've got 
 no reason to not give dom0 utmost priority - that makes perfect sense to 
 me - but I'm surprised the Xen scheduler would allow me to get into this 
 situation by default.


My understanding of this is entirely janitor-level, but I believe what you
are seeing is that the dom0 has exhausted its 'credits'  and so if a
DomU wants the CPU the dom0 gets kicked off the cpu, waits a timeslice
(I think timeslices are on the order of tens of milliseconds...  I've
read 60ms, which is quite a long time in terms of sending a packet to a
nearby storage box.)  then gets back on the CPU.  

This is why I'm always loath to give more than 1 or 2 vcpus to my DomUs,
and why I always reserve cpu0 for the dom0;   that way, the domu
can pass a packet to the dom0 which can process it and send it out
without waiting.   the domU and the Dom0 can run at the same time.

If you look (xm sched-credit -d 0)  you will see that xen assigns
all DomUs a default priority of 256.   It does not assign a higher priority
to the Dom0.  I assume this is because the xen people very much have
an attitude of 'well, I wrote this nice hypervisor.  You set it up.'  
Which is fine with me, as while I can set it up, there's no way I could
have written the nice hypervisor.  But yeah, I see no reason not to 
default to giving the dom0 as much cpu as it wants;  if the dom0 is 
unhappy, everyone is unhappy.

It seems like the sort of thing RHEL could do.   (well, that and 
increasing the default dom0-min-mem to something that doesn't
crash the dom0.)  

The 'xm sched-credit -d 0 6' line is in the /etc/rc.local
of all the Xen hosts I administer.  It helps a lot, even when you
use local disk. 

I have seen this problem without using iscsi, when the DomUs are heavily
loaded.   I get 'stutter' on the command line and dropped packets 
on the interface counters.   It's irritating, because without iscsi, the 
problem is usually rare and difficult to reproduce. 
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Xen vs. iSCSI

2009-06-15 Thread Luke S Crawford
Bill McGonigle b...@bfccomputing.com writes:
 In the DomU, I'll see a lock-up, and then filesystem errors.  e.g.:
 
Installing : kernel [ 
 ]  1/33EXT3-fs error (device xvda1) in ext3_ordered_writepage: IO failure
 
 In the Dom0, I'll see:
 
sd 6:0:0:0: timing out command, waited 360s
sd 6:0:0:0: SCSI error: return code = 0x0605
end_request: I/O error, dev sdc, sector 37319



try 

xm sched-credit -d 0 6

and see if you still have the problem.

if that doesn't work, try

xm vcpu-set 0 1

then edit all your domu config files and add:
cpus=1-7

(assuming a dual socket quad core box... the idea is to reserve a cpu
for the dom0)  



the idea is that if your dom0 is starved for CPU, you get all sorts of
network and other I/O weirdness.   
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Bad Disk Performance of domU

2009-05-19 Thread Luke S Crawford
Francisco Pérez fpere...@gmail.com writes:

  Hi list.
 
 I'm running Xen on a brand new dell PE 1950 with 146 GB SAS Disk on raid1.
 The performance of I/O on the domU is really poor, but in dom0 the
 performance is great.


Are you using HVM?  
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Bad Disk Performance of domU

2009-05-19 Thread Luke S Crawford
Francisco Pérez fpere...@gmail.com writes:

 No, the domu's are para-virtualized guest.

Are you using file:// devices?   

I ask because I use lvm-backed phy:// devices and I get near native
disk performance.   Attach the config file for the domain
(usually /etc/xen/domainname)  
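
With an lvm-backed volume the disk line looks something like this (VG/LV
names are placeholders):

  disk = [ 'phy:/dev/guests/domu1-disk,xvda,w' ]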

___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS] [OT] Godaddy hell...

2009-04-03 Thread Luke S Crawford
Jason Pyeron jpye...@pdinc.us writes:


  Everyone's pushing you to one of the VPS providers because 
  that's what all the cool kids are doing now that VM 
  technology is commoditized.
  
 
 I do not have an opinion on this.

I think people are pushing the VPS service because people who are interested
in administering the OS are likely to want the provider to handle the
hardware and network, but leave the Linux stuff to them.  This is 
where the responsibility is split on a VPS.

And it is a very clean and clear line, which I like.  I am responsible for 
the [virtual] hardware and network, you are responsible for the Linux bits.
Very clear.  When you are down, you know who is responsible.  

(I am a VPS provider, but I am not what you want.  I do have
an SLA on the hardware/network, but I don't even have a login account
on your VPS.   I like it.  I get to play with the bits I like.)  

another note:  I would focus less on SLA and more on how often they are
down (and how open they are about downtime.  Hiding downtime is a very
bad sign.)  Does a free month really make up for any significant amount of 
downtime?   

If you want that line to be between your code and whatever language/framework 
you wrote your app in, you need to get specialized hosting for that 
language/app.  

You are probably going to pay more for this than for more 'generic' hosting
or even than for VPS hosting, but if you aren't good at or don't like being a
SysAdmin, well, it's probably worth it.  

Personally, I think you should ask on a mailing list for whatever your
webapp is written in.  I know there are several hosting companies who
specialize in doing what you want for ruby on rails, and thousands who
specialize in doing this for PHP.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] [OT] Network switches

2009-03-25 Thread Luke S Crawford
Les Mikesell lesmikes...@gmail.com writes:
 If you get a service contract on any piece of Cisco equipment, you 
 typically get download access to all of the firmware updates.  

Yeah, but the problem for me is that for my frontend network, 100M is just
fine.  A used cisco 3548 is going to set me back around $200.  For my frontend,
it looks like a fine switch (my only question is... will it handle IPv6?   
it does vlan tunneling, so worst case I use a linux box to route my IPv6.)  
Getting access to firmware updates costs 5x that, every year.

I've had an ancient cat 2924 at a backup location online for several years
now.  No problems; it pushes packets at 100M just fine, and its SPAN
capabilities even work.  I've gotten lucky as far as security goes.  But it
doesn't really make sense to replace it with a better switch.  The upstream
switch above it is an SMC of similar age.  

 in a lot of scenarios there are several choices, each with a different 
 set of bugs that you won't know about unless you open a TAC case and 
 tell an engineer exactly what features have to work for you.

Yeah, but at the used prices for 100M kit, I can buy two or three, and test
it out to my heart's content.   I mean, my experience with support
(working for clients who can afford such things)  is that you have to 
understand the problem to get someone else to fix it anyhow, and usually 
understanding the problem is the hard part.  Once you understand the problem,
fixing it is trivial.  So I don't usually think it makes sense to pay for
support, especially when the equipment cost is such that I have a few spares
laying about in the lab.  
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] [OT] Network switches

2009-03-24 Thread Luke S Crawford
Rob Townley rob.town...@gmail.com writes:

 i would like to see real performance data via something like netperf
 with client machines booted from a standardized LiveCD, then
 performance under their Linux Distribution and performance under
 Windows.


Performance data is not the most important metric, at least for me.  

For me, the big problem is reliability and security.   My problem with 
used cisco is that getting access to the firmware usually costs more than
the used parts I'm buying... If I'm going to use the thing as a router at the
head of my network, I want to be sure that the thing can be secured, and 
sometimes that requires a firmware update. 

If someone sold support contracts (by support contracts, I mean firmware.
I don't need help, I just need the firmware.) for old switches for
less than the value of the switch, I'd buy.  If someone sold 
switches with open source firmware, I'd buy.  (I've bought myself an 
OpenGear console server instead of a cheaper used cyclades for similar 
reasons.)  

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Memory vs. Display Card

2009-03-09 Thread Luke S Crawford
Bill Campbell cen...@celestial.com writes:
 I usually go to the Kingston site to find the proper memory for
 specific main boards, and get most of our RAM from newegg.com.
 
   http://www.kingston.com
 
   http://www.newegg.com

I second this, except that I find Kingston often has the best price for
ram, as well as a decent compatibility wizard.  Make sure to click all the
way through to 'add to cart'  or whatever.  Kingston puts a higher price
on the 'price comparison' page for some reason.  

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Memory vs. Display Card

2009-03-07 Thread Luke S Crawford
Rick el...@spinics.net writes:

 Since memory has become quite cheap lately I decided to move from 2 GB
 to 6. When I installed the memory every thing was fine until I went to
 run level 5. At that point the screen turned to garbage and the system
 froze. Is there a way to fix this so I can use the memory I bought? Do
 I need a new display card?


Have you tried memtest86?  

without a serial console, it'd be hard to see if that's the problem, 
but it is a good place to start.

Often if you have bad memory the problem doesn't show until you use something
that actually uses more of your memory (like starting the GUI)
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS-virt] Running Fedora 10 (and rawhide) Xen guests/domUs on CentOS 5 dom0

2008-12-13 Thread Luke S Crawford
Pasi Kärkkäinen pa...@iki.fi writes:

 It seems Redhat guys have the packages available here:
 
 http://crobinso.fedorapeople.org/rhel5/install_f10/
 
 With those packages installed to RHEL 5.2 (or CentOS 5.2) you can
 install/use Fedora 10 guests/domUs.


Thanks!  I installed those and started playing about... it looks like
it works.  One problem:  the xenblk driver doesn't seem to be included
in the f10 image when I try to run virt-install, so it can't see any
drives to install onto.  

___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS] question on sending mail with 5.2

2008-09-13 Thread Luke S Crawford
Ralph Angenendt [EMAIL PROTECTED] writes:

 I really don't understand why people just don't turn off their mailservers if 
 they 
 don't want mail from others.


Most of us have come close.  I get north of 500 spams a day unprotected.
I've been using the same email since '01.  I know many others have it worse
than I do.  

At that point, there is no choice about losing mail.   When sorting that by
hand (and I have)  I delete a significant amount of good mail.  The automated
filters usually do much better than this human when you have 10 spams for every
legitimate mail.  

Rejecting mail from mailservers that don't follow the generally accepted
best practices is, I think, completely reasonable.  It gets rid of a whole
lot of spam, and the rules are pretty simple and easy to follow, so I think
it is completely reasonable to ask people who run mailservers to put in a 
little effort to set up things like rDNS, and to make sure they don't do
things like retry once a minute.

What if my mailserver was rejecting with a 4xx because it was overloaded?
Retrying every minute would certainly not help things.  


___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] question on sending mail with 5.2

2008-09-12 Thread Luke S Crawford
Jerry Geis [EMAIL PROTECTED] writes:
 In the event I have an important email and I want it try perhaps every
 minute (1minute)
 to send the email how do I accomplish this from the sendmail command line?

Considering just how many people use greylisting, this is likely a 
bad idea.  Greylisting works by rejecting the first message from a new
server with a 4xx (temporary) error code.   If the server tries again
immediately or never tries again, it's probably a spammer.  If the server
waits a reasonable period of time (say, 30 minutes) and then re-sends the
mail,  it's probably legit, and the greylist program puts that server on the 
whitelist so mail from that server goes through right away next time.

Many people set things up such that if you try again immediately, you get
put on a blacklist, as you are probably a spammer.  

(that said, the answer to your question is what you did with -q1m.  But
you probably don't want to do it.)
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [Centos] mirroring with LVM?

2008-08-18 Thread Luke S Crawford
Gordon McLellan [EMAIL PROTECTED] writes:
 I'm pulling my hair out trying to setup a mirrored logical volume.
 
 lvconvert tells me I don't have enough free space, even though I have
 hundreds of gigabytes free on both physical volumes.

Your problem is that vg1 only has one PV.  If you are mirroring with --corelog
you need a minimum of 2 PVs  (3 if you are using --mirrorlog disk).

You have other PVs, they just aren't available to vg1.  Add them with vgextend:
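
Something like (device and LV names are placeholders):

  vgextend vg1 /dev/sdb1                # give vg1 a second PV
  lvconvert -m1 --corelog vg1/yourlv    # then the mirror has somewhere to go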

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS-virt] how to monitor each VM's traffic?

2008-08-08 Thread Luke S Crawford
Rudi Ahlers [EMAIL PROTECTED] writes:

 Sure, that's for XEN, but it's not very effective. I need graph the traffic
 for each VM, not the vif - the vifs tend to change on a reboot, and also
 reset with the stats.

Set vifname=somename in the vif=[] statement and you can give the interfaces
symbolic names that don't change every reboot (the SNMP MIB number still
changes, but the SNMP name stays the same; you need to have cacti re-map
the names to numbers often.)
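
For example, in the domU config (MAC and name are placeholders):

  vif = [ 'mac=00:16:3e:00:00:01, bridge=xenbr0, vifname=web1' ]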

snmpd in the dom0 will then report for each interface as if the dom0 was a 
switch, and you can use cacti or mrtg or whatever to aggregate interface 
counts.  cacti or mrtg or whatever will take care of dealing with reboots
resetting the counters.

Like any layer2 bridge, you need to be careful of your arp cache... if someone
poisons your arp cache, all traffic will go to all DomUs, messing up your
counters.   But I've had plenty of co-lo providers with that problem on 
physical switches, so maybe that is acceptable.

That said, at prgmr.com, I just run bandwidthd at the head of my network.
I hang bandwidthd off of a SPAN port attached to my uplink.  The big problem
here is that it only supports IPv4.  the v6 traffic is free.  free!  but it
works with whatever virt tech you use as long as you trust the to/from IP
addresses, and as long as all your traffic is IPv4.
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS] Bonding and Xen

2008-07-15 Thread Luke S Crawford
Victor Padro [EMAIL PROTECTED] writes:
 Has anyone implemented this successfully?

I have not used bonding with xen, but once you have a bonded interface in the 
Dom0 it should be trivial.  Set up your bonded interface as usual, then in
/etc/xend-config.sxp where it says (network-script network-bridge) 
set it to

(network-script 'network-bridge netdev=bond0')

it should just work.


 These servers are meant to replace MS messaging and intranet webservers
 which holds up to 5000 hits per day and thousands of mails, and probably the
 Dom0 could not handle this kind of setup with only one 100mbps link, and
 could not afford changing all the networking hardware to gigabit, at least
 not yet.


100Mbps is a whole lot of bandwidth for a webserver unless you are serving 
video or large file downloads or something.  100Mbps worth of mail is enough
to choke a very powerful mailserver, never mind Exchange.

I suspect that if you are using windows on Xen, disk and network I/O to and
from the windows DomU will be a bigger problem than network speeds.  Are
you using the paravirtualized windows drivers?  without them, network and
disk IO is going to feel pretty slow in windows, no matter how fast the
actual network or disk is.  
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos 5.2 and Xen

2008-06-24 Thread Luke S Crawford
Ruslan Sivak [EMAIL PROTECTED] writes:
 Tom Lanyon wrote:
  On 24/06/2008, at 9:08 AM, Luke S Crawford wrote:
  We were discussing memory limits of the free (as in beer) closed source
  citrix xensource product-  limits are added to the free product in order
  to encourage people to upgrade to the more expensive products.
 
  From what I understand, Citrix does provide source to their product,
 so other then licensing, how is it different from open source?

like most of the dual-licensed products, if you pay you get support, and 
a nice GUI admin tool.  The Citrix XenSource product has another
advantage that is worth paying for: Paravirtualized windows drivers-  
Citrix/XenSource will provide you with stable paravirt disk and network 
drivers.  Very important things, if you plan on doing serious work with
your windows guest.  

Of course, I'm all *NIX, so yeah, for me there isn't much difference.  
But if you are running windows, Citrix/XenSource provides some compelling
value.  
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos 5.2 and Xen

2008-06-23 Thread Luke S Crawford
Tom Lanyon [EMAIL PROTECTED] writes:
 I haven't been following the thread, but has the discussion been about
 memory limits of Xen?

We were discussing memory limits of the free (as in beer) closed source 
citrix xensource product-  limits are added to the free product in order
to encourage people to upgrade to the more expensive products.  

These limits don't exist in the open-source xen product, which is what
the centos/Xen stuff is based on.

http://tx.downloads.xensource.com/downloads/docs/user/#SECTION0113

 Am I going to face any issues wanting to run some CentOS 5.x x86_64
 boxes with 16 or 32 GB memory as Xen hosts with up to 10 or 12 GB
 memory CentOS 5.x x86_64 domUs ?

I've personally run CentOS x86_64 5.1 boxes with north of 16G ram- 
there is nothing I am aware of that would stop you from putting as much 
ram as you want in a particular DomU.  

 Furthermore, am I going to encounter issues running CentOS 5.x i386
 boxes with 8 GB memory trying to run CentOS 4.x i386 domUs with 3.5 or
 4GB memory?

You will be needing PAE, but that is default for CentOS i386/xen, so it
should Just Work.  Make sure you install the libc6-xen package (it should
be installed as a dependency).  The usual PAE limits apply.  

I'm typing this message in emacs running on a DomU hosted on an i386/PAE box 
with 6G ram running CentOS5.1/xen.  

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos 5.2 and Xen

2008-06-17 Thread Luke S Crawford
Ruslan Sivak [EMAIL PROTECTED] writes:

 Luke S Crawford wrote:
  It is PAE.

 If it's PAE, then I'm a bit confused, as they advertise it as *Native
 64-bit hypervisor:* Scalability and support for enterprise
 applications

Heh.  Looks like I wasn't paying attention.  A long time ago, I believe the 
xensource product (3.1?) was i386-PAE only-  and 32-on-64 is 32-PAE
on 64, so you won't be able to run non-PAE 32-bit guests in paravirt
mode.

 Well I have up to 4GB to run windows and I can have the other 4GB for
 dom0, so if I can get OpenVZ or linux vserver running on there, I can
 use that to run my linux VM's.

But xenexpress limits you to 4GB of physical ram total
(see http://www.xensource.com/Documents/XenServer41ProductOverview.pdf),
so if you have 4GB in the DomU, you can't use another 4GB in the Dom0.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos 5.2 and Xen

2008-06-15 Thread Luke S Crawford
Ruslan Sivak [EMAIL PROTECTED] writes:
  running vmware under a xenU guest wouldn't lift any ram limit
  imposed by the xen kernel or dom0.

...

 The 4GB limit is artificial, and only applies to the vm's started
 using their closed source XenSource.  The host OS is most likely
 CentOS 5, and sees the whole 8GB (although it's not x64, so I'm
 guessing they use PAE or something.)

It is PAE.

 I only need 8GB of ram support, and no other features that are offered
 in XenStandard, so it seems kind of a waste to pay $1k per server for
 that. If another virtualization technology was installed on that OS,
 you can get the use of the other 4GB, and if not, I can always run my
 apps on Dom0, although I'd prefer to not install too much stuff on
 Dom0.

First, the Dom0 OS runs as a guest of the Xen hypervisor-  it is just
a guest that happens to have access to the PCI bus as well.  The Xen
hypervisor still controls what ram and CPU all domains, including the Dom0,
can see;  if the xen kernel is limiting you to 4G ram total, that 
limit will apply in the Dom0 as well.

Also, you are not going to be able to run a virtualization technology that
uses the hardware virtualization support from within a Xen guest, even
if that Xen guest happens to be the Dom0.   The Xen hypervisor
controls access to those instructions.  

You can run virtualization technologies that don't require HVM-   OpenVZ and
linux vserver will both work fine.  Heck, you can do that within an 
unprivileged Xen DomU, but that won't help you if you want to run
windows.  



___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos 5.2 and Xen

2008-06-11 Thread Luke S Crawford
[EMAIL PROTECTED] writes:
 If you only have 512mb of ram, there's almost no reason to virtualize. 
 Windows needs a minimum of 128-512MB to run stable.  I highly suggest that 
 you get more RAM - its very cheap these days.  

Seconded.  My standard server has 8G unbuffered ECC.  Newegg sells 
2x2GB packs of unbuffered ECC Kingston brand ddr2 for under $100.

http://www.newegg.com/Product/Product.aspx?Item=N82E16820134312

No reason, really, to not fill your motherboard with ram.  

 If you want to dedicate a box to virtualization, and won't be using more then 
 4GB of ram for your virtual machines - I highly recommend xenserver express.  
 Its free, but has much better performance then vmware.  

the free (closed) xensource product is good... I also wanted to point out 
the new gpl windows pv drivers:

http://wiki.xensource.com/xenwiki/XenWindowsGplPv/

you could use them with the standard open-source Xen, or even with the 
Xen support distributed with CentOS 5, and avoid the ram limits altogether.
(well, there is a limit to the open-source xen, but it's ridiculously high;
most of us won't hit it for several years, at least.)  

still kinda beta, but something to watch.  


 I wonder if it can be combined with other technologies - KVM, openVZ, etc. - to 
 give more than 4GB of ram for virtualization?  I tried installing vmware, but 
 it wouldn't run under a xen kernel.  

running vmware under a xenU guest wouldn't lift any ram limit imposed by the 
xen kernel or dom0.

the 4Gb limit is added to the free (closed source) citrix xen product
so that people have a reason to pay for the full version...  really,
if you need more than 4G, pay for full xensource, or use the open-source
Xen/open source pv drivers.


I do know some people that run linux vserver guests under a Linux Xen DomU-
that seemed to work ok.

and just for fun, I've run a Xen kernel/Dom0 under a Xen HVM DomU. 
Performance wasn't great;   I don't think I'd do it in production, but
it worked, and was a neat experiment.  
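
(for the curious, a minimal HVM guest config sketch of the sort that
experiment starts from-  the name, disk path, and install paths are
assumptions; on 64-bit installs qemu-dm lives under /usr/lib64:)

# /etc/xen/xen-in-xen  (xen 3.x syntax):
kernel = "/usr/lib/xen/boot/hvmloader"
builder = "hvm"
device_model = "/usr/lib/xen/bin/qemu-dm"
memory = 1024
name = "xen-in-xen"
disk = [ 'phy:/dev/vg0/nested,hda,w' ]
vnc = 1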
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Hardening CentOS by removing hacker tools

2008-06-06 Thread Luke S Crawford
Filipe Brandenburger [EMAIL PROTECTED] writes:
 My boss asked me to harden a CentOS box by removing hacker tools,
 such as nmap, tcpdump, nc (netcat), telnet, etc.

Removing network tools does not make it harder to break into the box; 
however, it can make it harder to do something with it once you are in.
removing those tools might help keep an infection from spreading, but it
won't protect the box itself.  (also, removing the programs just 
means that if your box gets compromised, the hacker needs to install 
some new packages.  Not difficult, even without root-  the attacker
can install to the compromised user's homedir.)  

It sounds like your boss doesn't know much about this.  you have 2
choices...  You can do what he says (largely useless), or you can try to 
educate yourself (and your boss) on ways to actually make your systems more 
secure.

I would advise the latter course, personally-  if the boss is a good 
boss, he will listen to his technical people.  

here are the basics: 

First, turn off all daemons you don't need.  if it's not running, you 
don't need to worry about whether there is a security hole in it.  
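
(on CentOS that looks something like this-  a sketch; the service name
is just an example:)

# see what starts at boot:
chkconfig --list | grep ':on'
# stop and disable anything you don't need, e.g.:
service cups stop
chkconfig cups off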

Second, I think a good firewall is useful... it saves your ass if you
accidentally leave a daemon running that you don't need, or if
the new guy starts up a daemon that you weren't running before, or if 
you need a daemon to be accessible to the office but not the world.  use the 
centos iptables default setup-  make sure you can take the box offline,
then change the default to 'reject' and then open things
up one service at a time until your system works again.  
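
(a minimal sketch of that default-deny approach;  open only what you
actually use:)

# keep loopback and existing connections working:
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# open services one at a time, e.g. ssh:
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# reject everything else:
iptables -A INPUT -j REJECT
service iptables save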

Third, subscribe to the announce list for your distro-  and check it 
every day.   Apply security updates immediately (you can't just do this
with cron;  some require reboots).  

also, make sure that PermitRootLogin is set to no in /etc/ssh/sshd_config
-  all of the successful brute-force attacks I've seen have been against
the root user.  Brute-forcing other users is more difficult, as the
attacker (usually an automated process) needs to first obtain the 
username;  if you watch /var/log/secure you'll see a lot more attempts at
root than at other users.
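
(the change itself is one line-  a sketch; keep your current session
open while you test it:)

# in /etc/ssh/sshd_config:
PermitRootLogin no
# then:
service sshd reload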

if you use applications that are not provided by your distro's standard
distribution, subscribe to the mailing lists for those, as well.

the idea being that the majority of hacks are known exploits... if you
watch the mailing lists, you can at least solve the known problems 
soon after they become generally known.  

those are the minimum steps you need to take... it's thousands of times
better than nothing.   these are the 'easy' steps that get you a lot
of security while minimally interfering with usability.


going beyond here, you must recognize that in the optimal case, there
is a tradeoff between usability and security.  and that is the optimal
case;  sometimes you can make things less usable without increasing 
security at all.


Beyond here, look at selinux, and look at mounting all user-accessible
partitions (/tmp, /home, and /var)  as noexec, ensuring that nobody but
root can write anywhere else-  it doesn't help if you get rooted, but it
makes things mildly more difficult for a local user to run a local root
exploit.  
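
(an /etc/fstab sketch-  the device names are assumptions, and note that
noexec on /tmp or /var can break some rpm scriptlets, so test first:)

/dev/sda3   /tmp    ext3   noexec,nosuid,nodev   1 2
/dev/sda5   /home   ext3   noexec,nosuid,nodev   1 2
/dev/sda6   /var    ext3   noexec,nosuid,nodev   1 2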

some people remove development tools, because many people transport exploit
code as c source code to the box, compile it and then execute it.  
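
(on CentOS, that would be something like the following-  a sketch:)

yum groupremove "Development Tools"
# or at least see what compilers are present:
rpm -q gcc gcc-c++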

many other things can be done... but don't bother until you take down 
unnecessary daemons, put up a firewall, subscribe to the announce lists
for your distro, and disable remote root login.  
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Re: several servers

2008-06-05 Thread Luke S Crawford
[EMAIL PROTECTED] writes:
 ok.. I can install dovecot+postfix+MySQL etc., and maybe that resolves
 the problem.
 I don't have a problem with the machines- the machines are good.  my problem
 is transparently receiving e-mail for users who are distributed across
 four machines, each holding a different set of users.
 how does google do it?  they have one entry point to the e-mail system
 (gmail.com) but they have several machines (maybe thousands) behind it,
 transparently.
 I read something about LVS... any comments on that software?  is it my
 solution, or am I lost?
 Roberto.-

many ways to do this.  


The easiest option is to put customers on mail1.isp.com, mail2.isp.com,
etc... have the server at isp.com just have a bunch of aliases pointing
user@isp.com to user@mail1.isp.com ... 

the disadvantages are that the end users need to know to set their
pop/imap clients to maily.isp.com, where y will be different for different
users, your aliases file will get big, and moving users from one
mailserver to another gets complicated (usually you put everyone on mail1
until it is almost full, then you put all new users on mail2, until it gets
full, etc...)  but incoming mail to user@isp.com will work, and this is by 
far the easiest to set up and it will scale pretty well.
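
(a sketch of what those aliases look like on the isp.com MX-  the
usernames are made up:)

# /etc/aliases on the machine receiving mail for isp.com:
roberto:   roberto@mail1.isp.com
maria:     maria@mail3.isp.com
# rebuild the alias database after editing:
newaliases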

at an ISP where I worked during the .com boom, we did this, but we wanted all 
users to go to mail.isp.com to get their mail.  we wrote a little C program 
that listened on port 110 until the pop client issued 'USER username'  and
then did a lookup (in the aliases file, incidentally... this got really
slow after we passed a million accounts-  I re-wrote it to use MySQL and 
things performed ok again)  to see where the user was, and forwarded the 
rest of the pop3 session to the correct server.  Really this is a minor
tweak;  it only saves you the trouble of making your users know which
mailserver they need to connect to.


Finally, do a search on 'Cyrus murder'-  essentially, the cyrus mailserver 
includes support for clustering your mailservers.  It's not redundant 
(e.g. if you lose a physical server you can lose data) but it performs 
well, from what I hear.  Cyrus is generally considered to be an 
'industrial strength' mailserver.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS-virt] my domU from jailtime.org using latests xen kernel freezees

2008-05-22 Thread Luke S Crawford
Kai Schaetzl [EMAIL PROTECTED] writes:
 David Hláčik wrote on Thu, 22 May 2008 13:34:17 +0200:
 
  disk = [ 'tap:aio:/home/xen/webdev/webdev_root.img,sda1,w',
  'tap:aio:/home/xen/webdev/webdev_swap.img,sda2,w' ]
 
 I suggest using file: instead of tap:aio (I haven't tested this, but it's 
 been said here or elsewhere several times that it is faster than tap:aio).
 I've been using it all the time with good results.

the tap driver is quite a bit faster than what file: does (file: 
mounts the file as a loopback device, and passes the loopback device to
the DomU;  the tap driver directly manipulates the blocks in the file.)

for more details:

http://www.usenix.org/events/usenix05/tech/general/full_papers/short_papers/warfield/warfield_html/index.html
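
(the difference is just the prefix on the disk line-  a sketch using the
same image:)

# loopback-backed:
disk = [ 'file:/home/xen/webdev/webdev_root.img,sda1,w' ]
# blktap, which works on the image file directly:
disk = [ 'tap:aio:/home/xen/webdev/webdev_root.img,sda1,w' ]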

  root = /dev/sda1 ro
 
 You do not need that, comment it out.

You need that if you are not using pygrub.  

 Btw: *why* do you want to use a jailtime image if you can just 
 install/kickstart a CentOS 5 VM in no time?

I agree this is probably the best course of action if you want CentOS 5. 
virt-install will give you a working system with pygrub (and 
I think pygrub is keen.) 
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS] Convert a real system in a DomU

2008-05-16 Thread Luke S Crawford
Sergio Belkin [EMAIL PROTECTED] writes:
 dd if=/dev/sdaX of=fedora6.img (on FC6)

of course, you can't do this if you are booting off /dev/sdaX-  boot
into a rescue disk or something.

 and then on Centos 5.1
 
 dd if=fedora6.img of=/dev/sdaX
 
 Could I run this system into Xen?

assuming you sized things correctly, yes.

Personally I find it easier to just make .tar.gz files of the entire system.
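
(a sketch, run from a rescue environment with the source system mounted
at /mnt/sysimage-  the paths are assumptions:)

tar -C /mnt/sysimage --numeric-owner -czpf /tmp/fc6-root.tar.gz .
# and on the target, into a mounted, empty filesystem:
tar -C /mnt/newdisk --numeric-owner -xzpf /tmp/fc6-root.tar.gz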

A few things you need to do to the fc image before you start.

you want to install the xen kernel and make an entry for it in the guest's
grub.conf (which pygrub reads) before you move to step 1.   fix the /etc/fstab
to match what you call your disks in the xm config file.   make sure the
initrd you make has --preload xenblk.

make sure you run a getty on the console (for CentOS, the console is xvc0;
I assume it is the same for fc)   so add xvc0 to /etc/securetty and run 
a getty on it in /etc/inittab.
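
(sketches of those guest-side changes-  the kernel version and paths
are assumptions:)

# rebuild the initrd with the xen block driver preloaded:
mkinitrd -f --preload=xenblk /boot/initrd-2.6.18-xen.img 2.6.18-xen
# let root log in on the xen console:
echo xvc0 >> /etc/securetty
# and in /etc/inittab, run a getty on it:
co:2345:respawn:/sbin/agetty xvc0 9600 vt100-nav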

after this you can make the image as above.

your config file needs to be something like this:
bootloader = /usr/bin/pygrub
memory = 512
name = fc_test
vif = []
disk = [
'phy:/dev/sdaX,sda,w'
]

it should 'just work'
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Best Motherboard

2008-05-15 Thread Luke S Crawford
Ryan Nichols [EMAIL PROTECTED] writes:
 Really? We bought that EXACT motherboard.. 10 to be exact, and we've had 9
 fail, and the 10th is on its way to major failure.. the odd thing is that the
 10th one was the first one purchased, and that was 6 months ago.

Unless you have many hundreds of servers, I would not expect that failure
rate even from the cheapest 'free with purchase of CPU' motherboards.  (assuming 
they were not returns.  Never buy a returned motherboard.)  

Are you using ESD protection?

Seriously.  I worked at one place where we bought SuperMicro SuperServers
and assembled them ourselves.   About 1 in 3 were bad before being
put in production, and the ones that we did get to production had weird
problems like failed NIC cards months later.

I put in a strict anti-static regime (grounded conductive mats on the floor 
and the table, and grounded foot and wrist straps).   after that, we built 
another 70 servers.  Only one failed, and they were rock-solid once in 
production.

granted, the static problem at this office was noticeable-  you would 
walk across the room and touch something grounded and get zapped.  But
you can kill a motherboard with a much smaller ESD than you can feel.

Being overly paranoid during assembly provably results in fewer pages
in the middle of the night later on.

___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Best Motherboard

2008-05-15 Thread Luke S Crawford
Simon Jolle sjolle [EMAIL PROTECTED] writes:
 What are the advantages of building your own server comparing with
 products from HP, Dell and IBM? Is it cheaper?

I find that if you order the base package from Dell, you get a pretty
good deal-  sometimes better than buying the parts alone.  But if you want
more ram, disk, or CPU (and the base system is pretty anemic), you usually 
end up paying twice market rate for the parts if you buy those upgrades from 
dell. 

the other vendors are  similar, only their kit is much nicer, and the 
prices across the board are higher.   

I do really like the HP ILO on the high-end boxes that lets you ssh into the 
ILO card-  much better than IPMI, imo.  but really, an external 
network-accessible rebooting power strip and a FreeBSD box with a rocketport 
multi-serial card in the rack does the same thing at a lower cost, and
I'm more comfortable with the security on a FreeBSD box than on the 
ILO card.  

Personally, I find that the most advantageous setup is often to buy the
pre-built chassis/motherboard kit from SuperMicro or Intel, and then get
the rest of the parts from Newegg or Ingram Micro.

See, there is usually only a very small premium for the chassis/motherboard
assembly,  which I think is worth it because I don't have to screw with the 
cooling system, the board fits the chassis just right, and almost all of the 
assembly work is done.But, at the same time, I get to pay commodity 
prices for ram, cpu and disk. 

my new servers are intel SR1530AHLX chassis/motherboard combos,
which I get from whatever reseller is currently cheapest.
they are nice, but the chipset isn't yet supported by memtest86;
it is supported by bluesmoke (EDAC), though, so good enough.
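
(bluesmoke/EDAC reports ECC errors through sysfs-  a sketch; the exact
paths vary by kernel and chipset:)

# corrected ECC errors since boot on the first memory controller:
cat /sys/devices/system/edac/mc/mc0/ce_count
# and uncorrected:
cat /sys/devices/system/edac/mc/mc0/ue_count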

I use core2quad q6600 CPUs (I buy them at fry's, on sale)  and 8Gb of
crucial unbuffered ECC ddr2, which I usually get at newegg.  (I think ECC 
is very important and worth the (rather small) premium-  I don't think 
buffering is worth the required upgrades-  I could get two or three of
this kit for the price of a xeon/fbdimm setup that is only slightly 
faster.)   I then put in 2x1Tb sata drives (usually the consumer-grade 
kind rather than the enterprise kind, which only makes sense because 
everything is mirrored and I live near the co-lo.)  

total cost is around $1100-1300 for a quad-core box with 8Gb of ram and 
1Tb of mirrored storage.   You can do the same with higher-end kit,
of course, replacing my vendors with others, and you can usually 
save a good chunk of change over getting the whole thing from HP/IBM/Dell.  

Of course, if you have the budget, there is a support advantage to getting
everything from the same place, but with my labor costs, the premium isn't
worth it.    The problem is that the support advantage isn't that great-
you still usually can't just ship the box back to the vendor saying "It's 
broken"  -  they run a cursory check, and if it's clean, they send it back.

Usually determining for sure that there is in fact a hardware problem 
(and you or the vendor needs to do this before the vendor will fix it)
also tells you what hardware is bad-  and once you know what the bad part is,
the only overhead is looking up the proper vendor address.  

My experience has been that I am usually better at finding hardware errors
than dell (and rackable, and HP)-  I attribute it to the fact that if the
dell tech doesn't find the problem, he gets to go home early.  If I don't
find the problem, the thing crashes and my pager wakes me up sunday morning.

(in my experience, sending ram back to, say, corsair,  or disks back to,
for example, seagate,  after a proper diagnosis is far more likely to get 
me a new, working part than sending the whole kit back to dell or rackable
with an "It's broken".)
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos