Re: Virtualize or bare-metal?

2014-01-14 Thread Giancarlo Razzolini
On 2014-01-14 06:49, Renaud Allard wrote:
>
> To be fair, virtualizing stuff without common shared storage is a
> little bit useless. The biggest power of virtualization is being able
> to move VMs between physical hosts, or even to power on additional
> physical hosts when you need more capacity.
>
> But security-wise, just to quote Theo:
> x86 virtualization is about basically placing another nearly full
> kernel, full of new bugs, on top of a nasty x86 architecture which
> barely has correct page protection. Then running your operating system
> on the other side of this brand new pile of shit.
>
> You are absolutely deluded, if not stupid, if you think that a
> worldwide collection of software engineers who can't write operating
> systems or applications without security holes, can then turn around
> and suddenly write virtualization layers without security holes.
I've never said that virtualization is secure. Some recent work in the
field even shows that virtualization is almost impossible to do in a
secure way; see Gal Diskin's talk at 30C3. But demand keeps rising, and
power and cooling costs rise with it. That's why I use virtualization:
not only does it make better use of resources, it also cuts the power
bill. And in a world that is going to face more and more blackouts in
the near future, that is a great thing. I've consolidated my 10-server
farm down to just two machines, using 1/5 of the power I used before,
and they are faster. You can easily upgrade the hardware of two
machines; try upgrading the hardware of 10 for the same amount of money.
That's why I didn't blink when choosing to virtualize everything.

Cheers,

-- 
Giancarlo Razzolini
GPG: 4096R/77B981BC



Re: Virtualize or bare-metal?

2014-01-14 Thread Renaud Allard

On 01/14/2014 05:49 AM, Giancarlo Razzolini wrote:
> On 2014-01-14 01:11, Christopher Ahrens wrote:
>> What I meant by bare-metal was if I should run a bunch of services on
>> the same installation of OpenBSD.
>
> I've run into the same physical space issue with my company servers and
> didn't think twice about using virtualization. But, as others have
> pointed out, you could easily accommodate all your services on one
> OpenBSD server with chroots. I disagree, though, when they compare a
> chroot directly with a VM hypervisor, because there are many things a
> hypervisor can do that a chroot can't. I've been using Linux with
> QEMU/KVM; lots of PCI device passthroughs work like a charm (there are
> potential security issues, worth noting). I believe the other obvious
> choice is Xen. I would not go with VirtualBox, and VMware is expensive.
> QEMU/KVM ties in nicely with the system, so it's my choice. You should
> make your own choice.
>
> Cheers,



To be fair, virtualizing stuff without common shared storage is a
little bit useless. The biggest power of virtualization is being able
to move VMs between physical hosts, or even to power on additional
physical hosts when you need more capacity.
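
For example, with a QEMU/KVM setup managed through libvirt (just a
sketch; the guest name and host are made up, and it assumes both hosts
mount the guest's disk image at the same path):

  # move the running guest "ns1" to host2 without copying its disk,
  # which lives on storage both hosts can reach
  virsh migrate --live ns1 qemu+ssh://host2.example.net/system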


But security-wise, just to quote Theo:
x86 virtualization is about basically placing another nearly full 
kernel, full of new bugs, on top of a nasty x86 architecture which 
barely has correct page protection. Then running your operating system 
on the other side of this brand new pile of shit.


You are absolutely deluded, if not stupid, if you think that a worldwide 
collection of software engineers who can't write operating systems or 
applications without security holes, can then turn around and suddenly 
write virtualization layers without security holes.




Re: Virtualize or bare-metal?

2014-01-13 Thread Matt M
I personally wouldn't advise using a single bare-metal machine just for
DHCP, a separate one for DNS, a separate one for sendmail, etc. That
seems like a huge waste of resources to me. My opinion is that you would
fare better, as
was suggested earlier, to use some of the other bare-metal machines for
more intensive tasks like Apache. And, I always like to have a spare box or
two to experiment with different things on, so I would keep one just for
that if it were me. Virtualizing is great for testing and experimenting,
but sometimes you can't beat a real machine for that.


On Tue, Jan 14, 2014 at 12:50 AM, Christopher Ahrens wrote:

> Matthew Weigel wrote:
>
>> On 1/13/2014 9:11 PM, Christopher Ahrens wrote:
>>
>>> Jack Woehr wrote:
>>>
>>>> Christopher Ahrens wrote:
>>>>
>>>>> Wish I could split everything off to physical, but all I have
>>>>> space for is a mini-rack that fits under my desk in my apartment
>>>>
>>>> Sounds like you have answered your own question!


>>> What I meant by bare-metal was if I should run a bunch of services on
>>> the same
>>> installation of OpenBSD.
>>>
>>
>> Well, hardware failures on a small pool of machines are still hardware
>> failures on a small pool of machines, whether you have virtual servers or
>> not.
>>
>> For security, chroot (especially with privilege separation) accomplishes
>> a lot
>> of what virtualization claims to offer, with a much longer history of
>> auditing
>> and better understood weaknesses.
>>
>> It is usually easier, in my experience, to manage one system running many
>> services in individual chroot environments than to manage many (virtual)
>> systems.  Files in chroot environments will sometimes need to be updated
>> when
>> you change the main system, but in my experience this is a much easier
>> task to
>> identify and manage than applying those changes en masse to a collection
>> of
>> virtual hosts.  Plus, there will be plenty of system updates to the main
>> system that don't need to trickle down to the chroot environments, but
>> will
>> almost always need to be applied individually to each virtual host.
>>
>> You may still want to physically separate some concerns if you have enough
>> machines (e.g., build machines vs. service machines, spreading out
>> disk-intensive services, etc.), but in general I don't think
>> virtualization
>> will particularly help you.
>>
>>
>
> OK, I think I'll try loading multiple services onto single machines. I'm
> thinking I could always just attach a bunch of carp interfaces to the
> machine, one per service; then, if I want to move a service to another
> machine (virtual or physical), I just destroy its carp interface and
> recreate it on the new one.
>
> At this point my plan is to use a pair of machines for each category (to
> allow for a machine failure, or for update cycles with no downtime).
> Each pair would handle one of: public internet services (external-facing
> DNS, public web server, SMTP filtering), internal services (internal
> DNS, LDAP, CA), or business applications (wiki, mail store / IMAP
> access, source code control).  The last two boxes would be used as
> spares and to test virtualization options.
>
> I am just not used to using a single machine for multiple roles (I cut
> my teeth on Windows 2000/2003 and picked up some bad habits and obsolete
> advice).



Re: Virtualize or bare-metal?

2014-01-13 Thread Christopher Ahrens

Matthew Weigel wrote:
> On 1/13/2014 9:11 PM, Christopher Ahrens wrote:
>> Jack Woehr wrote:
>>> Christopher Ahrens wrote:
>>>>
>>>> Wish I could split everything off to physical, but all I have
>>>> space for is a mini-rack that fits under my desk in my apartment
>>>
>>> Sounds like you have answered your own question!
>>
>> What I meant by bare-metal was if I should run a bunch of services on
>> the same installation of OpenBSD.
>
> Well, hardware failures on a small pool of machines are still hardware
> failures on a small pool of machines, whether you have virtual servers
> or not.
>
> For security, chroot (especially with privilege separation) accomplishes
> a lot of what virtualization claims to offer, with a much longer history
> of auditing and better understood weaknesses.
>
> It is usually easier, in my experience, to manage one system running
> many services in individual chroot environments than to manage many
> (virtual) systems.  Files in chroot environments will sometimes need to
> be updated when you change the main system, but in my experience this is
> a much easier task to identify and manage than applying those changes en
> masse to a collection of virtual hosts.  Plus, there will be plenty of
> system updates to the main system that don't need to trickle down to the
> chroot environments, but will almost always need to be applied
> individually to each virtual host.
>
> You may still want to physically separate some concerns if you have
> enough machines (e.g., build machines vs. service machines, spreading
> out disk-intensive services, etc.), but in general I don't think
> virtualization will particularly help you.




OK, I think I'll try loading multiple services onto single machines. I'm
thinking I could always just attach a bunch of carp interfaces to the
machine, one per service; then, if I want to move a service to another
machine (virtual or physical), I just destroy its carp interface and
recreate it on the new one.
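
Roughly what I have in mind (untested sketch; the addresses, vhid, and
password are placeholders):

  # give each service its own carp interface, e.g. external DNS
  ifconfig carp1 create
  ifconfig carp1 vhid 1 pass xyzzy carpdev em0 \
      192.0.2.53 netmask 255.255.255.0

  # to move the service, drop it here...
  ifconfig carp1 destroy
  # ...and bring up the same vhid/address on the other box (or put the
  # equivalent line in /etc/hostname.carp1 there so it survives a reboot)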


At this point my plan is to use a pair of machines for each category (to
allow for a machine failure, or for update cycles with no downtime).
Each pair would handle one of: public internet services (external-facing
DNS, public web server, SMTP filtering), internal services (internal
DNS, LDAP, CA), or business applications (wiki, mail store / IMAP
access, source code control).  The last two boxes would be used as
spares and to test virtualization options.
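
For each pair I'd expect the usual master/backup arrangement, something
like (again just a sketch, addresses invented):

  # /etc/hostname.carp1 on the primary of a pair
  inet 192.0.2.53 255.255.255.0 192.0.2.255 vhid 1 carpdev em0 advskew 0

  # same file on its partner; the higher advskew means it only takes
  # over when the primary disappears, e.g. while it reboots for an upgrade
  inet 192.0.2.53 255.255.255.0 192.0.2.255 vhid 1 carpdev em0 advskew 100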


I am just not used to using a single machine for multiple roles (I cut
my teeth on Windows 2000/2003 and picked up some bad habits and obsolete
advice).




Re: Virtualize or bare-metal?

2014-01-13 Thread Giancarlo Razzolini
On 2014-01-14 01:11, Christopher Ahrens wrote:
>
> What I meant by bare-metal was if I should run a bunch of services on
> the same installation of OpenBSD.
>

I've run into the same physical space issue with my company servers and
didn't think twice about using virtualization. But, as others have
pointed out, you could easily accommodate all your services on one
OpenBSD server with chroots. I disagree, though, when they compare a
chroot directly with a VM hypervisor, because there are many things a
hypervisor can do that a chroot can't. I've been using Linux with
QEMU/KVM; lots of PCI device passthroughs work like a charm (there are
potential security issues, worth noting). I believe the other obvious
choice is Xen. I would not go with VirtualBox, and VMware is expensive.
QEMU/KVM ties in nicely with the system, so it's my choice. You should
make your own choice.
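
To give an idea, passing a PCI device through with the vfio-pci driver
goes roughly like this (sketch only; the PCI address and vendor/device
IDs below are placeholders, and the host needs the IOMMU enabled, e.g.
intel_iommu=on):

  # detach the device from its host driver and hand it to vfio-pci
  modprobe vfio-pci
  echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
  echo 8086 10d3 > /sys/bus/pci/drivers/vfio-pci/new_id

  # then give it to the guest
  qemu-system-x86_64 -enable-kvm -m 2048 \
      -drive file=guest.img,if=virtio \
      -device vfio-pci,host=01:00.0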

Cheers,

-- 
Giancarlo Razzolini
GPG: 4096R/77B981BC



Re: Virtualize or bare-metal?

2014-01-13 Thread Matthew Weigel
On 1/13/2014 9:11 PM, Christopher Ahrens wrote:
> Jack Woehr wrote:
>> Christopher Ahrens wrote:
>>>
>>> Wish I could split everything off to physical, but all I have
>>> space for is a mini-rack that fits under my desk in my apartment
>>
>> Sounds like you have answered your own question!
>>
> 
> What I meant by bare-metal was if I should run a bunch of services on the same
> installation of OpenBSD.

Well, hardware failures on a small pool of machines are still hardware
failures on a small pool of machines, whether you have virtual servers or not.

For security, chroot (especially with privilege separation) accomplishes a lot
of what virtualization claims to offer, with a much longer history of auditing
and better understood weaknesses.
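
Several base daemons already chroot themselves by default, and for one
that doesn't, a minimal setup can look something like this (sketch only;
the daemon name, the _mydaemon user, and the paths are invented, and the
exact files to copy depend on what the program links against):

  # build a minimal tree and copy in what the daemon needs
  mkdir -p /var/mydaemon/etc /var/mydaemon/usr/lib \
      /var/mydaemon/usr/libexec /var/mydaemon/usr/local/sbin
  cp -p /usr/local/sbin/mydaemon /var/mydaemon/usr/local/sbin/
  cp -p /usr/libexec/ld.so /var/mydaemon/usr/libexec/
  # ...plus the libraries that "ldd /usr/local/sbin/mydaemon" lists

  # run it inside the tree as an unprivileged user
  chroot -u _mydaemon /var/mydaemon /usr/local/sbin/mydaemon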

It is usually easier, in my experience, to manage one system running many
services in individual chroot environments than to manage many (virtual)
systems.  Files in chroot environments will sometimes need to be updated when
you change the main system, but in my experience this is a much easier task to
identify and manage than applying those changes en masse to a collection of
virtual hosts.  Plus, there will be plenty of system updates to the main
system that don't need to trickle down to the chroot environments, but will
almost always need to be applied individually to each virtual host.
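
A short script can take care of most of that after patching the main
system, e.g. (illustration only; which files actually need refreshing
depends on the chroots in question):

  # refresh the files each chroot borrows from the main system
  for root in /var/chroot/smtpd /var/chroot/httpd; do
      cp -p /etc/resolv.conf /etc/hosts "$root/etc/"
      cp -p /usr/libexec/ld.so "$root/usr/libexec/"
  done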

You may still want to physically separate some concerns if you have enough
machines (e.g., build machines vs. service machines, spreading out
disk-intensive services, etc.), but in general I don't think virtualization
will particularly help you.
-- 
 Matthew Weigel
 hacker
 unique & idempot . ent



Re: Virtualize or bare-metal?

2014-01-13 Thread Christopher Ahrens

Jack Woehr wrote:

> Christopher Ahrens wrote:
>
>> Wish I could split everything off to physical, but all I have
>> space for is a mini-rack that fits under my desk in my apartment
>
> Sounds like you have answered your own question!



What I meant by bare-metal was if I should run a bunch of services on 
the same installation of OpenBSD.




Re: Virtualize or bare-metal?

2014-01-13 Thread Jack Woehr

Christopher Ahrens wrote:


> Wish I could split everything off to physical, but all I have space for
> is a mini-rack that fits under my desk in my apartment


Sounds like you have answered your own question!

--
Jack Woehr   # "We commonly say we have no time when,
Box 51, Golden CO 80402  #  of course, we have all that there is."
http://www.softwoehr.com # - James Mason, _The Art of Chess_, 1905



Re: Virtualize or bare-metal?

2014-01-13 Thread Christopher Ahrens

L. V. Lammert wrote:

> On Mon, 13 Jan 2014, Christopher Ahrens wrote:
>
>> I have recently inherited a set of high-spec machines that I intend to
>> use for OpenBSD.  I am planning on using these machines for DNS, HTTP,
>> mail, LDAP, netboot, a build system for following -stable, etc.  So my
>> question is: is it recommended to load all these services on a single
>> instance of OpenBSD running on bare metal, or to virtualize and use
>> much smaller OpenBSD virtual machines?
>
> It would be much better to use a set of small machines (we use older
> Compaq 386s & 486s) for most of those servers, and save the 'big iron'
> for a web server where it might be beneficial.
>
> Virtualization does not make sense for core services - there is a
> higher chance of a single failure taking down multiple services, and
> security can be a problem.
>
> Lee




Wish I could split everything off to physical, but all I have space for
is a mini-rack that fits under my desk in my apartment.  (Hosting
services around here are insane, especially the $200+ per-incident costs
if you need to do something.)


Since I have 8 of these machines, I was planning on setting up
duplicated VMs on each virtualization host and using CARP across them.




Re: Virtualize or bare-metal?

2014-01-13 Thread L. V. Lammert
On Mon, 13 Jan 2014, Christopher Ahrens wrote:

> I have recently inherited a set of high-spec machines that I intend to
> use for OpenBSD.  I am planning on using these machines for DNS, HTTP,
> mail, LDAP, netboot, a build system for following -stable, etc.  So my
> question is: is it recommended to load all these services on a single
> instance of OpenBSD running on bare metal, or to virtualize and use
> much smaller OpenBSD virtual machines?
>
It would be much better to use a set of small machines (we use older
Compaq 386s & 486s) for most of those servers, and save the 'big iron'
for a web server where it might be beneficial.

Virtualization does not make sense for core services - there is a higher
chance of a single failure taking down multiple services, and security
can be a problem.

Lee



Virtualize or bare-metal?

2014-01-13 Thread Christopher Ahrens

I have recently inherited a set of high-spec machines that I intend to
use for OpenBSD.  I am planning on using these machines for DNS, HTTP,
mail, LDAP, netboot, a build system for following -stable, etc.  So my
question is: is it recommended to load all these services on a single
instance of OpenBSD running on bare metal, or to virtualize and use much
smaller OpenBSD virtual machines?

If the recommendation is to virtualize, what platform should I use?







The dmesg of the systems I'll be using:

OpenBSD 5.4 (GENERIC.MP) #41: Tue Jul 30 15:30:02 MDT 2013
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 34314244096 (32724MB)
avail mem = 33393082368 (31846MB)
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xeb4c0 (57 entries)
bios0: vendor American Megatrends Inc. version "2.0b" date 09/17/2012
bios0: Supermicro X9SCD
acpi0 at bios0: rev 2
acpi0: sleep states S0 S1 S4 S5
acpi0: tables DSDT FACP APIC FPDT MCFG HPET SSDT PRAD SPMI SSDT SSDT DMAR
acpi0: wakeup devices PS2K(S4) PS2M(S4) UAR1(S4) P0P1(S4) USB1(S4)
USB2(S4) USB3(S4) USB4(S4) USB5(S4) USB6(S4) USB7(S4) PXSX(S4) RP01(S4)
PXSX(S4) RP02(S4) PXSX(S4) [...]
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz, 3300.56 MHz
cpu0:
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS
cpu0: 256KB 64b/line 8-way L2 cache
cpu0: smt 0, core 0, package 0
cpu0: apic clock running at 100MHz
cpu1 at mainbus0: apid 2 (application processor)
cpu1: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz, 3300.02 MHz
cpu1:
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS
cpu1: 256KB 64b/line 8-way L2 cache
cpu1: smt 0, core 1, package 0
cpu2 at mainbus0: apid 4 (application processor)
cpu2: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz, 3300.02 MHz
cpu2:
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS
cpu2: 256KB 64b/line 8-way L2 cache
cpu2: smt 0, core 2, package 0
cpu3 at mainbus0: apid 6 (application processor)
cpu3: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz, 3300.02 MHz
cpu3:
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS
cpu3: 256KB 64b/line 8-way L2 cache
cpu3: smt 0, core 3, package 0
cpu4 at mainbus0: apid 1 (application processor)
cpu4: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz, 3300.02 MHz
cpu4:
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS
cpu4: 256KB 64b/line 8-way L2 cache
cpu4: smt 1, core 0, package 0
cpu5 at mainbus0: apid 3 (application processor)
cpu5: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz, 3300.02 MHz
cpu5:
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS
cpu5: 256KB 64b/line 8-way L2 cache
cpu5: smt 1, core 1, package 0
cpu6 at mainbus0: apid 5 (application processor)
cpu6: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz, 3300.02 MHz
cpu6:
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS
cpu6: 256KB 64b/line 8-way L2 cache
cpu6: smt 1, core 2, package 0
cpu7 at mainbus0: apid 7 (application processor)
cpu7: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz, 3300.02 MHz
cpu7:
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,POPCN