Re: [Users] Otopi pre-seeded answers and firewall settings

2014-03-23 Thread Joshua Dotson
Giuseppe, et al.,

I gave up on my six-server hosted-engine install, partly for this reason.
In addition to this problem, I found that I couldn't use a bridge of my own
naming.  Then, when I tried to associate interfaces with bridges in the web
interface, my hand-tuned bridges were fatally clobbered: the files I wrote
by hand in /etc/sysconfig/network-scripts/ifcfg-*, bridges and interfaces
(some with VLANs) alike.  And there were other things, like the Westmere
vs. Ivy Bridge issue (see below).

Anyway, I think what's happening to your install is that iptables on the
host is getting clobbered by the automatic host install that happens when
the hosted-engine setup script finally contacts the engine for the first
time.  I'm not sure how to keep this from happening, but it's a place to
start, and I think it's the reason that setting False didn't help.  By the
way, it took a two-hour test for me to learn that even removing the
/etc/sysconfig/iptables file AND stopping AND disabling iptables via
systemctl on both host and engine did nothing to combat this behavior.
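For reference, this is roughly what I tried on both host and engine, to no
effect (plain systemd commands, assuming the stock iptables service unit on
Fedora 19):

  # Drop the rules oVirt wrote and try to keep iptables from coming back:
  rm -f /etc/sysconfig/iptables
  systemctl stop iptables.service
  systemctl disable iptables.service

  # Confirm that no filter rules are loaded:
  iptables -L -n

As far as I can tell, host-deploy simply re-created /etc/sysconfig/iptables
and re-enabled the service the next time the engine touched the host.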

Back when I set up 3.0, I saw similar behavior.  At that time, though, the
iptables issue wasn't fatal.  This time I observed that the overwriting and
enabling/starting of iptables causes the very last part of the
hosted-engine setup script to fail miserably: because the engine cannot
contact the host at the end of its install phase, the H/A configuration is
never done.  This is my theory, anyway.

I think oVirt should leave the firewall _completely_ alone and simply
document which ports should be open.  I don't think we need that special
line oVirt puts at the bottom of /etc/sysconfig/iptables.  I'll stop
rambling now.  :-)  I like oVirt, but getting so far into this that I have
a two-hour turnaround every time I want to test a minor tweak is just too
much.  I hope this will get better in time; when it does, maybe I'll try
again.
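For anyone who does go the manual route, the rules below are roughly what I'd
expect a host to need.  This is from memory, so treat the exact port list as
an assumption and check the documentation for your version:

  # SSH and the engine-to-host management channel (VDSM)
  iptables -A INPUT -p tcp --dport 22 -j ACCEPT
  iptables -A INPUT -p tcp --dport 54321 -j ACCEPT
  # SPICE/VNC console ports
  iptables -A INPUT -p tcp --dport 5900:6923 -j ACCEPT
  # libvirt TLS and live-migration ports
  iptables -A INPUT -p tcp --dport 16514 -j ACCEPT
  iptables -A INPUT -p tcp --dport 49152:49216 -j ACCEPT

The engine VM itself mostly just needs 80/443 for the web UI and API.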

Here's what libvirt has to say about iptables vs. bridges:



The final step is to disable netfilter on the bridge:

 # cat >> /etc/sysctl.conf <<EOF
 net.bridge.bridge-nf-call-ip6tables = 0
 net.bridge.bridge-nf-call-iptables = 0
 net.bridge.bridge-nf-call-arptables = 0
 EOF
 # sysctl -p /etc/sysctl.conf

It is recommended to do this for performance and security reasons.  See
Fedora bug #512206 (https://bugzilla.redhat.com/512206).  Alternatively,
you can configure iptables to allow all traffic to be forwarded across the
bridge:

# echo "-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT" > /etc/sysconfig/iptables-forward-bridged
# lokkit --custom-rules=ipv4:filter:/etc/sysconfig/iptables-forward-bridged
# service libvirtd reload

  source:
http://wiki.libvirt.org/page/Networking#Creating_network_initscripts

You might be interested to know that you can pre-populate vm.conf.in (under
/usr/share) before the install.  Here's mine:

vmId=@VM_UUID@

memSize=@MEM_SIZE@

display=@CONSOLE_TYPE@

devices={index:2,iface:ide,address:{controller:0, target:0, unit:0, bus:1, type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}

devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00, slot:0x06, domain:0x0000, type:pci, function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}

devices={device:scsi,model:virtio-scsi,type:controller}

devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}

vmName=@NAME@

spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir

smp=@VCPUS@

cpuType=@CPU_TYPE@

emulatedMachine=@EMULATED_MACHINE@

devices={nicModel:pv,macAddr:00:16:3e:3d:78:10,linkActive:true,network:brbaseboard,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:ab3f9ae9-1d1b-432e-997d-f3458f89cf10,address:{bus:0x01, slot:0x01, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface}

devices={nicModel:pv,macAddr:@MAC_ADDR@,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x01, slot:0x02, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}

devices={nicModel:pv,macAddr:00:16:3e:3d:78:30,linkActive:true,network:brstorage,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:ab3f9ae9-1d1b-432e-997d-f3458f89cf30,address:{bus:0x01, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface}

devices={nicModel:pv,macAddr:00:16:3e:3d:78:40,linkActive:true,network:brcompute,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:ab3f9ae9-1d1b-432e-997d-f3458f89cf40,address:{bus:0x01, slot:0x04, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface}


[Users] CPU: Westmere vs. Ivy Bridge

2014-03-18 Thread Joshua Dotson
Hello,

The nodes on the cluster I'm installing have Ivy Bridge CPUs
(http://ark.intel.com/products/75790/Intel-Xeon-Processor-E5-2630-v2-15M-Cache-2_60-GHz).
The latest choice I see in the hosted-engine installer is Westmere.  Am I
missing out on anything significant by choosing Westmere in this case?  Is
there any plan to add Ivy Bridge as an option?

Even if there is no difference behind the scenes from choosing Westmere,
having an Ivy Bridge option would have saved me the time it took to dig
through the source code to work out what the CPU family choice actually
governs: instruction sets (performance?).
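For anyone curious, this is roughly how I compared what the host CPU offers
with what a guest sees under the Westmere model (a sketch; run the first
command on the host, the second inside a guest in the Westmere cluster, then
copy one of the files over before the final step):

  # On the host: record the instruction-set flags the physical CPU advertises.
  grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > /tmp/host-flags

  # Inside the guest: record what the virtual CPU advertises.
  grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > /tmp/guest-flags

  # Flags the host has but the guest doesn't (on Ivy Bridge hardware limited
  # to Westmere, I'd expect things like avx, f16c and rdrand to show up here):
  comm -23 /tmp/host-flags /tmp/guest-flags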

Thanks,
Joshua

[Users] Post-Install Engine VM Changes Feasible?

2014-03-15 Thread Joshua Dotson
Hi,

I'm in the process of installing 3.4 RC(2?) on Fedora 19.  I'm using hosted
engine with introspective GlusterFS+keepalived+NFS, à la [1], across six
nodes.
I have a layered networking topology ((V)LANs for public, internal,
storage, compute and IPMI).  I am comfortable doing the bridging for each
interface myself via /etc/sysconfig/network-scripts/ifcfg-*.
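To give a concrete idea, each of my bridges is just an initscripts pair along
these lines (a trimmed sketch; the names, VLAN ID and addresses here are
placeholders rather than my real ones):

  # /etc/sysconfig/network-scripts/ifcfg-brinternal
  DEVICE=brinternal
  TYPE=Bridge
  ONBOOT=yes
  BOOTPROTO=static
  IPADDR=10.0.10.11
  NETMASK=255.255.255.0
  DELAY=0

  # /etc/sysconfig/network-scripts/ifcfg-em1.10  (VLAN 10 on em1, enslaved to the bridge)
  DEVICE=em1.10
  VLAN=yes
  ONBOOT=yes
  BRIDGE=brinternal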

Here's my desired topology:
http://www.asciiflow.com/#Draw6325992559863447154

Here's my keepalived setup:
https://gist.github.com/josh-at-knoesis/98618a16418101225726
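In case the gist ever disappears, the heart of it is a VRRP instance floating
an IP in front of the NFS export, roughly this shape (interface name, router
ID and VIP are placeholders):

  vrrp_instance VI_NFS {
      state BACKUP
      interface brstorage
      virtual_router_id 51
      priority 100
      advert_int 1
      virtual_ipaddress {
          10.0.20.100/24
      }
  }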

I'm writing a lot of documentation of the many steps I'm taking.  I hope to
eventually release a distributed introspective all-in-one (including
distributed storage) guide.

Looking at vm.conf.in, it looks like, by default, I'd end up with a single
interface on my engine, probably on my internal VLAN, since that's where I'd
like the control traffic to flow.  I could certainly do NAT, but I'd be
happiest if the engine had a presence on all of the LANs, if for no other
reason than that I want to send backups directly over the storage VLAN.

I'll cut to it: I believe I could successfully alter the vdsm template
(vm.conf.in) to give me the extra interfaces I require.  It hit me, however,
that I could just take the defaults for the initial install and later come
back with virsh to make my changes to the gracefully shut-down VM.  Is that
true?
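Concretely, once the engine VM is down, I have in mind something like this
(untested; HostedEngine is, I believe, the default VM name, brstorage is one
of my bridges, and I don't yet know whether the hosted-engine machinery would
later overwrite the change from its own vm.conf):

  # Add a second NIC, on the storage bridge, to the engine VM:
  virsh -c qemu:///system attach-interface HostedEngine bridge brstorage \
      --model virtio --persistent

  # Inspect the result:
  virsh -c qemu:///system dumpxml HostedEngine | grep -A4 '<interface'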

[1] http://www.andrewklau.com/ovirt-hosted-engine-with-3-4-0-nightly/

Thanks,
Joshua

[Users] Hosted Engine + iSCSI on 3.4.0rc

2014-03-02 Thread Joshua Dotson
Hi,

I'm thinking of trying iSCSI (provided by a resource external to oVirt) as
the backing storage for a hosted-engine environment.  Is this feasible?
Would it have any benefit over GlusterFS + a self-hosted engine?
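If I do try it, my first step would just be to present the LUN to each host
with plain open-iscsi, something like this (the portal address and IQN are
made-up placeholders):

  # Discover targets exported by the external iSCSI server:
  iscsiadm -m discovery -t sendtargets -p 10.0.20.5:3260

  # Log in to the target that would back the hosted-engine storage domain:
  iscsiadm -m node -T iqn.2014-03.example:hosted-engine -p 10.0.20.5:3260 --login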

Thanks,
Joshua