Hi,
Is it possible to have a shared storage domain between two data centers in
oVirt?
We replicate an FC volume between two data centers using FC SAN storage
technology, and we have an oVirt cluster on each site, defined in separate
DCs. The idea behind this is to set up a DR site and also balance the
On Fri, Apr 22, 2016 at 10:31 AM, Budur Nagaraju wrote:
> Hi,
>
> I have configured hosted engine with two hosts; one of the hosted-engine
> hosts is down and I am unable to make it active.
>
> Is there any way to fix the issue? I have restarted ha-agent and ha-broker
> but no luck.
If still not solved, pl
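For reference, the restart plus a status check usually looks like this (a sketch, assuming a standard hosted-engine host; the service names are the oVirt HA daemons):

```
# On the host that is marked down
systemctl restart ovirt-ha-broker ovirt-ha-agent
# Then check what the HA agents think of the engine VM
hosted-engine --vm-status
```

If the agent still reports the host as down, the agent and broker logs under /var/log/ovirt-hosted-engine-ha/ are the next place to look.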
Hi,
In order to figure out the problem, please open a bug detailing the error
you get and the exact version you are trying to upgrade to, and attach the
ovirt-engine-dwh.log and the engine-setup log.
Thank you,
Shirly Radco
BI Software Engineer
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th f
On Sun, Apr 24, 2016 at 3:11 AM, Pat Riehecky wrote:
> I realize now I shouldn't have set the default cluster name,
You have to manually create such a cluster in the engine after
'engine-setup' is finished and before telling '--deploy' to continue.
> is there a way
> I can resume the install of
Hi
I have in production a scenario similar to what you've described.
The "enabling factor" is a "storage virtualization" set of appliances
that maintains mirrored logical volumes over FC physical volumes
across two distinct data centers, while giving simultaneous rw access
Greetings oVirt Family;
Due to a catastrophic power failure, my datacenter lost power. I am using a
CentOS 7 server to provide iSCSI services to my oVirt platform.
When the power came back on, and the iscsi server booted back up, the filters
in lvm.conf were faulty and LVM assumed control over the
On Sun, May 1, 2016 at 1:35 AM, wrote:
> On 2016-04-30 23:22, Nir Soffer wrote:
>>
>> On Sun, May 1, 2016 at 12:48 AM, wrote:
>>>
>>> On 2016-04-30 22:37, Nir Soffer wrote:
>>>> On Sat, Apr 30, 2016 at 10:28 PM, Nir Soffer wrote:
>>>>> On Sat, Apr 30, 2016 at 7:16 PM, wrote:
>>>>>> On 2016-04-30 16:55, Nir Soffer wrote:
>>>>>>> On Sat, Apr 30, 2016 at
It's very hard to understand your flow when time moves backwards.
Please try again from a clean state. Make sure all hosts have same clock.
Then document the exact time you do stuff - starting/stopping a host,
checking status, etc.
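A quick way to make the timestamps comparable across hosts (a minimal sketch; proper NTP sync via chrony or ntpd is the real fix for drift):

```shell
# Print the current time in UTC with the same one-liner on every host,
# then compare. An offset of more than a second or two will scramble
# the ordering of events across the hosts' logs.
date -u +"%Y-%m-%dT%H:%M:%SZ"
```

Where chrony is deployed, `chronyc tracking` also reports the measured offset from the NTP source.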
Some things to check from your logs:
in agent.host01.log:
MainT
On Sun, May 1, 2016 at 3:31 PM, wrote:
> On 2016-04-30 23:22, Nir Soffer wrote:
>>
>> On Sun, May 1, 2016 at 12:48 AM, wrote:
>>>
>>> On 2016-04-30 22:37, Nir Soffer wrote:
>>>> On Sat, Apr 30, 2016 at 10:28 PM, Nir Soffer wrote:
>>>>> On Sat, Apr 30, 2016 at 7:16 P
On 2016-05-01 14:01, Nir Soffer wrote:
On Sun, May 1, 2016 at 3:31 PM, wrote:
On 2016-04-30 23:22, Nir Soffer wrote:
On Sun, May 1, 2016 at 12:48 AM, wrote:
On 2016-04-30 22:37, Nir Soffer wrote:
On Sat, Apr 30, 2016 at 10:28 PM, Nir Soffer wrote:
On Sat, Apr 30, 2016 at
Hi,
I have a two-node + engine oVirt setup, and I was having problems
doing a live migration between nodes. I looked in the vdsm logs and
noticed SELinux errors, so I checked the SELinux config, and both the
ovirt-engine host and one of the nodes had SELinux disabled. So I
thought I would enable i
On Fri, Apr 29, 2016 at 9:20 AM, Sandro Bonazzola
wrote:
>
>
> On Thu, Apr 28, 2016 at 11:06 PM, Beckman, Daniel <
> daniel.beck...@ingramcontent.com> wrote:
>
>> Hello,
>>
>>
>>
>> I’m trying to set up oVirt for the first time using hosted engine. This is
>> on a Dell PowerEdge R720 (512GB RAM),
Hi, before starting targetcli you should remove all LVM auto-imported
volumes:
dmsetup remove_all
Then restart your targetcli.
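Since the root cause described upthread was a faulty lvm.conf filter on the iSCSI server, a sketch of a global_filter that keeps the host's LVM away from the exported backing devices (the device path here is hypothetical; adjust it to your layout):

```
# /etc/lvm/lvm.conf on the iSCSI server (sketch)
devices {
    # Reject the devices that back the exported LUNs so the host's LVM
    # never auto-activates the oVirt VGs living on top of them;
    # accept everything else.
    global_filter = [ "r|^/dev/sdb|", "a|.*|" ]
}
```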
On 01.05.2016 at 1:51 PM, "Clint Boggio" wrote:
> Greetings oVirt Family;
>
> Due to catastrophic power failure, my datacenter lost power. I am using a
> CentOS7 serve
Hello,
I have a problem deleting one snapshot.
Output of the script vm-disk-info.py:
Warning: volume 023110fa-7d24-46ec-ada8-d617d7c2adaf is in chain but illegal
Volumes:
a09bfb5d-3922-406d-b4e0-daafad96ffec
After running the md5sum command I realized that the volume that changed is
the base:
a
Thank you so much, Arman. Using that command, I was able to restore
service.
I really appreciate the help
> On May 1, 2016, at 2:58 PM, Arman Khalatyan wrote:
>
> Hi, before starting targetcli you should remove all LVM auto-imported
> volumes:
> dmsetup remove_all
> Then restart your ta
You will need to provide the hosted-engine setup log to see which
gluster command failed to execute.
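For what it's worth, the setup logs typically live under /var/log/ovirt-hosted-engine-setup/ (exact filenames vary per run); something like this would locate the failing gluster call:

```
# On the host where the deployment ran; newest log last
ls -ltr /var/log/ovirt-hosted-engine-setup/
grep -i gluster /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log
```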
On 04/30/2016 10:10 PM, Langley, Robert wrote:
I’m attempting to host the engine within a GlusterFS Replica 3 storage
volume.
During setup, after entering the server and volume, I’m receivin