Updated info:
https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/microcode-update-guidance.pdf
Looks like Intel is now committing to support Sandy/Ivy Bridge.
No mention of Westmere or earlier as of yet :-(
On 1/26/2018 10:13 AM, WK wrote:
That CPU is an X5690. That is Westmere
Have you checked /etc/mtab?
On 01/26/2018 06:10 PM, Alex Bartonek wrote:
I'm stumped.
I powercycled my server by accident and I cannot mount my data drive. I
was getting buffer I/O errors but was finally able to boot up by
disabling automount in fstab.
I cannot mount my ext4 drive. Anything
I'm stumped.
I powercycled my server by accident and I cannot mount my data drive. I was
getting buffer I/O errors but was finally able to boot up by disabling
automount in fstab.
I cannot mount my ext4 drive. Anything else I can check?
[root@blitzen t]# dmesg | grep sdb
[1.714138] sd
Hello there,
I just installed oVirt on brand new machines: the engine on a VirtualBox
VM in my current infrastructure, and a single dedicated CentOS host
attached to the engine.
Here are my host specs (srvhc02):
CPU: Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz (4C 8T)
Mem: 32GB DDR3
Disks:
- 2
Hi Yaniv,
I looked at your reference and it looks like I have almost the same thing,
but I was looking to take it a step further with a virtual NUMA node. Is it
really necessary to do that for extra performance? Below is my code snippet;
it is not cleaned up yet, but it works. I am also looking at
Hello there,
I just installed oVirt on brand new machines: the engine on a VirtualBox
VM in my current infrastructure, and a single dedicated CentOS host
attached to the engine.
I am getting extremely poor write performance in my oVirt Windows 2012R2
VM, whatever the virtio or virtio-scsi device
I believe there is about 50% overhead or even more...
On 26.01.2018 7:40 PM, "Christopher Cox" wrote:
> Does it matter? This is just one of those required things. IMHO, most
> companies know there will be impact, and I would think they would accept
> any informational
Does it matter? This is just one of those required things. IMHO, most
companies know there will be impact, and I would think they would accept
any informational measurement after the fact.
There are probably only a few cases where timing is so tight that
a skew would matter.
Just
That CPU is an X5690. That is Westmere class. We have a number of those
doing 'meatball' application loads that don't need the latest and greatest CPU.
I do not believe the microcode fix for Westmere is out yet, and it
may never be.
Intel has, so far, promised fixes for Haswell or better
I've been considering a hyperconverged oVirt setup vs. SAN/NAS, but I wonder
how the Meltdown patches have affected GlusterFS performance since it is
CPU intensive. Has anyone who has applied recent kernel updates noticed a
performance drop with GlusterFS?
If a disk fails (i.e. the node fails), assuming no RAID, will the node
rebuild once the disk gets replaced?
Is that what will happen, or am I oversimplifying?
On 2018-01-26 09:02, Yaniv Kaul wrote:
On Fri, Jan 26, 2018 at 4:58 PM, wrote:
Yaniv,
You bring up a valid
On Wed, Jan 24, 2018 at 11:18 PM, Don Dupuis wrote:
> I am able to create a VM with a NIC and disks using the Python SDK, but I am
> having trouble understanding how to assign a virtual NUMA node to a
> physical NUMA node via the Python SDK. Any help in this area
On Fri, Jan 26, 2018 at 4:58 PM, wrote:
> Yaniv,
>
> You bring up a valid point.
>
> I asked about RAID since I was concerned about drive failures &
> performance.
> Since Gluster will handle data replication, using an HBA seems like a better
> choice?
>
Without local node
I am able to create a VM with a NIC and disks using the Python SDK, but I am
having trouble understanding how to assign a virtual NUMA node to a
physical NUMA node via the Python SDK. Any help in this area would be
greatly appreciated.
Thanks
Don
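A hedged illustration of what is being asked for here (not Don's actual
snippet): a minimal sketch using the Python SDK (ovirtsdk4), where the
engine URL, credentials, VM name, memory size and node indexes are
placeholders, and the exact pinning attributes should be checked against
the SDK/REST API documentation for your version:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]   # placeholder VM name
vm_service = vms_service.vm_service(vm.id)

# Add one virtual NUMA node that owns part of the VM's memory and CPUs,
# and pin it to host (physical) NUMA node 0 via numa_node_pins.
numa_nodes_service = vm_service.numa_nodes_service()
numa_nodes_service.add(
    types.VirtualNumaNode(
        index=0,
        memory=4096,   # MiB assigned to this virtual NUMA node
        cpu=types.Cpu(cores=[types.Core(index=0), types.Core(index=1)]),
        numa_node_pins=[types.NumaNodePin(index=0, pinned=True)],
    )
)

connection.close()

For the pinning to take effect the VM will typically also need its NUMA
tune mode and host/CPU pinning set appropriately; that part is omitted
above.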
Hi: I installed oVirt version 4.2.0, compiled from source code.
At 2018-01-23 00:01:54, "Sandro Bonazzola" wrote:
2018-01-22 12:48 GMT+01:00 Pym :
Hi:
I'm installing ovirt-4.2 from the source code,
Hi, I'm curious, may I ask on which
Yaniv,
You bring up a valid point.
I asked about RAID since I was concerned about drive failures &
performance.
Since Gluster will handle data replication, using an HBA seems like a
better choice?
From Yaniv:
I think there are two interesting questions here:
1. Why would you want RAID? Your
On Fri, Jan 26, 2018 at 1:15 PM, Gianluca Cecchi
wrote:
>
> On Fri, Jan 26, 2018 at 12:56 PM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> Hello,
>> I'm trying to update an environment from 4.1.7 to 4.1.9.
>>
>>
>
>> I don't see errors in vdsm logs of
Thanks all for the info... so it seems we will have to wait for 4.2.2, as the
last comment in this issue specifies:
https://github.com/oVirt/ovirt-web-ui/issues/460
Regards
Carl
On Fri, Jan 26, 2018 at 6:43 AM, Giorgio Biacchi
wrote:
> It seems it's a bug. There's
Dear Luca,
I've created a config with two 16-port 10GbE switches; they are not connected
to each other.
I have 4 hosts and 1 storage host. I don't know why, but I can't ping from
every host to every other host.
For example, I can ping from .1 to .2 but not to .3! From .3 I can ping .4 but
not .1.
Hi Demeter,
I don't know how the switch setup is done here, but as far as I know
they aren't stacked through an interconnect.
Luca
On Fri, Jan 26, 2018 at 1:44 PM, Demeter Tibor wrote:
> Dear Luca,
>
> Sorry for the very late reply..
>
> I now want to try out your suggestions.
>
So when I run a copy operation it is successful. Move and export do not
work.
On Fri, Jan 26, 2018 at 6:57 AM, Donny Davis wrote:
> When I try to do it via the API I get a more descriptive response
> Cannot export Virtual Disk. Disk configuration (${volumeFormat}
>
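For reference, a hedged sketch (not necessarily what was run here) of how
the move and export actions can be driven from the Python SDK (ovirtsdk4);
the disk and storage domain names are placeholders, and the export call is
the kind of request that returns the "incompatible with the storage domain
type" fault quoted above for the disk imported from 4.1:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

disks_service = connection.system_service().disks_service()
disk = disks_service.list(search='name=mydisk')[0]   # placeholder disk name
disk_service = disks_service.disk_service(disk.id)

# Move the disk to another data storage domain.
disk_service.move(storage_domain=types.StorageDomain(name='data2'))

# Export the disk to an export storage domain; with the disk imported
# from 4.1 this is where the fault shows up.
disk_service.export(storage_domain=types.StorageDomain(name='export1'))

connection.close()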
Dear Luca,
Sorry for the very late reply...
I now want to try out your suggestions.
Should I interconnect the two switches in active/backup mode?
Thanks!
Tibor
On Dec 12, 2017, at 19:35, Luca 'remix_tj' Lorenzetto
wrote:
> Hi,
> If you're looking at
On Fri, Jan 26, 2018 at 12:56 PM, Gianluca Cecchi wrote:
> Hello,
> I'm trying to update an environment from 4.1.7 to 4.1.9.
>
>
> I don't see errors in vdsm logs of target host, but I do see this in
> /var/log/messages of target host
>
> Jan 26 12:39:51 ov200
When I try to do it via the API I get a more descriptive response:
Cannot export Virtual Disk. Disk configuration (${volumeFormat}
${volumeType}) is incompatible with the storage domain type.
Makes sense, it was imported from 4.1.
So now the question is: how can I fix this storage domain?
On Fri,
Hello,
I'm trying to update an environment from 4.1.7 to 4.1.9.
Already migrated the engine (separate) and one host.
This host is now running a pair of VMs that were powered off and then
powered on via the "run once" feature.
Now I'm trying to evacuate the VMs from the other hosts and get them all to 4.1.9.
It seems it's a bug. There's already another thread here with this subject:
Ovirt 4.2 Bug with Permissons on the Vm Portal?
I've enabled the oVirt 4.2 pre-release repo, but the problem is still present
in version 4.2.1.3-1.el7.centos.
Somewhere I read that it will be fixed in 4.2.2; I'm waiting...
I am trying to copy a disk from a storage domain that was imported from 4.1
to a newer storage domain, and this exception is thrown in the UI:
Uncaught exception occurred. Please try reloading the page. Details:
Exception caught: (TypeError) : Cannot read property 'g' of null
Please have
I have been trying to get this worked out myself.
Firstly, someone with a system permission will be able to see things from
the system level. I have been adding the permission at the cluster level,
but I also just can't seem to figure out the user portal in 4.2. They can
either see it all or
Forgot to mention that the latest microcode update was a rollback of
previous updates :)
You can find more info here:
https://access.redhat.com/errata/RHSA-2018:0093
On 26.01.2018 10:50 AM, "Gianluca Cecchi" <
gianluca.cec...@gmail.com> wrote:
> Hello,
> nice to see integration of
You should download the microcode from the Intel web page and overwrite
/lib/firmware/intel-ucode or so... please check the readme.
On 26.01.2018 10:50 AM, "Gianluca Cecchi" <
gianluca.cec...@gmail.com> wrote:
Hello,
nice to see integration of Spectre-Meltdown info in 4.1.9, both for guests
Hello,
nice to see integration of Spectre-Meltdown info in 4.1.9, both for guests
and hosts, as detailed in release notes:
I have upgraded my CentOS 7.4 engine VM (outside of oVirt cluster) and one
oVirt host to 4.1.9.
Now in General -> Software subtab of the host I see:
OS Version: RHEL - 7 -