[ovirt-users] Re: Constantly XFS in memory corruption inside VMs

2020-12-02 Thread Abhishek Sahni
Can you try with a test VM, and see whether this happens after a Virtual
> Machine migration?
>
> What are your mount options for the storage domain ?
>
> Best Regards,
> Strahil Nikolov
>
> On Saturday, 28 November 2020, 18:25:15 GMT+2, Vinícius Ferrão via
> Users wrote:
>
> Hello,
>
>
>
> I’m trying to discover why an oVirt 4.4.3 Cluster with two hosts and NFS
> shared storage on TrueNAS 12.0 is constantly getting XFS corruption inside
> the VMs.
>
>
>
> For random reasons VMs get corrupted, sometimes halting, sometimes being
> silently corrupted, and after a reboot the system is unable to boot due to
> “corruption of in-memory data detected”. Sometimes the corrupted data is
> “all zeroes”, sometimes there’s data there. In extreme cases XFS
> superblock 0 gets corrupted and the system cannot even detect an XFS
> partition anymore, since the XFS magic number is corrupted on the first blocks
> of the virtual disk.
>
>
>
> This has been happening for a month now. We had to roll back some backups,
> and I no longer trust the state of the VMs.
>
>
>
> Using xfs_db I can see that some VMs have corrupted superblocks while the
> VM is still up. One in particular had sb0 corrupted, so I knew that when a
> reboot kicked in the machine would be gone, and that’s exactly what happened.
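Superblock 0 sits at the start of the filesystem and begins with the four-byte magic "XFSB" (0x58465342), which is what "the magic XFS key on the first blocks" refers to. A minimal sketch of that check, run here against a throwaway file standing in for a virtual disk (paths are hypothetical):

```python
XFS_SB_MAGIC = b"XFSB"  # magic number at offset 0 of XFS superblock 0

def sb0_magic_ok(path):
    """Return True if the first 4 bytes carry the XFS superblock magic."""
    with open(path, "rb") as f:
        return f.read(4) == XFS_SB_MAGIC

# Demo: a fake 512-byte "superblock" with a healthy magic...
with open("/tmp/fake_sb.img", "wb") as f:
    f.write(XFS_SB_MAGIC + b"\x00" * 508)
print(sb0_magic_ok("/tmp/fake_sb.img"))  # True

# ...then simulate the "all zeroes" corruption described above.
with open("/tmp/fake_sb.img", "r+b") as f:
    f.write(b"\x00\x00\x00\x00")
print(sb0_magic_ok("/tmp/fake_sb.img"))  # False
```

On a live system the equivalent read-only xfs_db check is `xfs_db -r -c "sb 0" -c "p magicnum" /dev/vdX1` (device name is an example).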
>
>
>
> The other day I was installing a new CentOS 8 VM, and after running
> dnf -y update and a reboot the VM was corrupted and needed XFS repair.
> That was an extreme case.
>
>
>
> So, I’ve looked at the TrueNAS logs, and there’s apparently nothing wrong
> with the system. No errors logged in dmesg, nothing in /var/log/messages and
> no errors on the “zpools”, not even after scrub operations. On the switch,
> a Catalyst 2960X, we’ve been monitoring it and all its interfaces. There
> are no “up and down” events and zero errors on all interfaces (we have a 4x Port
> LACP on the TrueNAS side and 2x Port LACP on each host), so everything seems
> to be fine. The only metric that I was unable to get is “dropped packets”,
> but I don’t know whether this can be an issue or not.
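On the Linux hosts, the missing drop counters can be read directly from the kernel; a sketch (interface names are examples, not from the thread):

```shell
# Per-interface RX/TX statistics, including dropped counts, for a bond
ip -s link show bond0

# Raw kernel counters for a single LACP member interface
cat /sys/class/net/em1/statistics/rx_dropped
cat /sys/class/net/em1/statistics/tx_dropped
```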
>
>
>
> Finally, on oVirt, I can’t find anything either. I looked at
> /var/log/messages and /var/log/sanlock.log but found nothing
> suspicious.
>
>
>
> Is anyone out there experiencing this? Our VMs are mainly CentOS
> 7/8 with XFS; there are 3 Windows VMs that do not seem to be affected,
> but everything else is affected.
>
>
>
> Thanks all.
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org Privacy
> Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VLYSE7HC
> FNWTWFZZTL2EJHV36OENHUGB/
>
>


-- 

ABHISHEK SAHNI
Mob : +91-990-701-5143


[ovirt-users] Re: Can't manually migrate VM's (4.3.0)

2019-03-26 Thread Abhishek Sahni
I have applied the hotfix mentioned in the link below and restarted
*vdsmd*; that resolved the error. However, I am still not sure whether the
migration issue was caused by this bug, as I can migrate the machines again.

https://bugzilla.redhat.com/show_bug.cgi?id=1690301

https://gerrit.ovirt.org/#/c/98499/

On Tue, Mar 26, 2019 at 3:15 PM Abhishek Sahni 
wrote:

> Just upgraded all the hosts from 4.2.8 to 4.3.2. Everything is running
> fine as of now apart from this issue: while manually migrating VMs to
> other hosts that have enough capacity, I am getting the same error,
>
> "Could not fetch data needed for VM migrate operation" over UI
>
> I can see the same VDSM errors on each node,
>
> ==
> ERROR Internal server error
> Traceback (most recent
> call last):
>   File
> "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in
> _handle_request
> res = method(**params)
>   File
> "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in
> _dynamicMethod
> result =
> fn(*methodArgs)
>   File "", line 2,
> in getAllVmStats
>   File
> "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
> ret = func(*args,
> **kwargs)
>   File
> "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1388, in getAllVmStats
> statsList =
> self._cif.getAllVmStats()
>   File
> "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 567, in
> getAllVmStats
> return [v.getStats()
> for v in self.vmContainer.values()]
>   File
> "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1766, in getStats
> oga_stats =
> self._getGuestStats()
>   File
> "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1967, in
> _getGuestStats
> stats =
> self.guestAgent.getGuestInfo()
>   File
> "/usr/lib/python2.7/site-packages/vdsm/virt/guestagent.py", line 505, in
> getGuestInfo
> del qga['appsList']
> KeyError: 'appsList'
> =
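The traceback bottoms out in `del qga['appsList']` raising `KeyError` when the guest-agent stats lack that key. A sketch of the defensive pattern such a fix amounts to (an illustration, not the exact patch from the links above):

```python
def scrub_guest_stats(qga):
    """Drop the bulky appsList field from guest-agent stats, if present.

    dict.pop with a default never raises, unlike `del qga['appsList']`,
    which is the line that blew up in the traceback above.
    """
    qga.pop('appsList', None)
    return qga

stats_with = {'appsList': ['pkg1', 'pkg2'], 'memUsage': 42}
stats_without = {'memUsage': 42}

print(scrub_guest_stats(stats_with))     # {'memUsage': 42}
print(scrub_guest_stats(stats_without))  # no KeyError: {'memUsage': 42}
```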
>
> Please let me know if this is related to the same issue.
>
>
>
>
> On Fri, Mar 1, 2019 at 9:40 PM Greg Sheremeta  wrote:
>
>> You are correct -- it's new. We just discovered it about 2 weeks ago.
>> That's how I knew to point you to it :D
>>
>> Best wishes,
>> Greg
>>
>> On Fri, Mar 1, 2019 at 11:03 AM Ron Jerome  wrote:
>>
>>> Bingo!!  That fixed the issue.
>>>
>>> Thanks Greg.
>>>
>>> Just one final question on this...  Is that parameter new?  I could have
>>> sworn that I just cut and pasted that section of the docs (modifying as
>>> appropriate)  into my squid.conf file when I set it up.
>>>
>>> Ron.
>>>
>>> On Fri, 1 Mar 2019 at 10:02, Greg Sheremeta  wrote:
>>>
>>>>
>>>> On Fri, Mar 1, 2019 at 9:47 AM Ron Jerome  wrote:
>>>>
>>>>> Thanks Michal,
>>>>>
>>>>> I think we are onto something here.  That request is getting a 401
>>>>> unauthorized response...
>>>>>
>>>>> ssl_access_log:10.10.10.41 - - [01/Mar/2019:09:26:46 -0500] "GET
>>>>> /ovirt-engine/api/vms/?search=id=dc0a6167-3c36-48e4-9cca-d69303037859
>>>>> HTTP/1.1" 401 71
>>>>>
>>>>> I guess it should be noted here that I'm accessing the engine through
>>>>> a squid proxy on one of the hosts.  I just tested a direct connection to
>>>>> the engine (without going through the proxy) and it works, so the next
>>>>> question is how to fix the proxy issue?  Could this be an SSL certificate
>>>>> issue?

[ovirt-users] Re: Can't manually migrate VM's (4.3.0)

2019-03-26 Thread Abhishek Sahni
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> I've toggled all the hosts into and out of maintenance, and VM's
>>>>>>>>>> migrate off of each as expected, but I still can't manually initiate 
>>>>>>>>>> a VM
>>>>>>>>>> migration from the UI.  Do you have any hints as to where to look 
>>>>>>>>>> for error
>>>>>>>>>> messages?
>>>>>>>>>>
>>>>>>>>>> Thanks in advance,
>>>>>>>>>>
>>>>>>>>>> Ron.
>>>>>>>>>>
>>>>>>>>>> On Mon, 25 Feb 2019 at 19:56, Ron Jerome 
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> It's a 3 node cluster, each node has 84G RAM, and there are only
>>>>>>>>>>> two two other VM's running, so there should be plenty of capacity.
>>>>>>>>>>>
>>>>>>>>>>> Automatic migration works, if I put a host into Maintenance, the
>>>>>>>>>>> VM's will migrate.
>>>>>>>>>>>
>>>>>>>>>>> Ron
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Feb 25, 2019, 6:46 PM Greg Sheremeta, <
>>>>>>>>>>> gsher...@redhat.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Turns out it's a bad error message. It just means there are no
>>>>>>>>>>>> hosts available to migrate to.
>>>>>>>>>>>>
>>>>>>>>>>>> Do you have other hosts up with capacity?
>>>>>>>>>>>>
>>>>>>>>>>>> Greg
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Mon, Feb 25, 2019 at 3:01 PM Ron Jerome 
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> I've been running 4.3.0 for a few weeks now and just
>>>>>>>>>>>>> discovered that I can't manually migrate VM's from the UI.  I get 
>>>>>>>>>>>>> an error
>>>>>>>>>>>>> message saying: "Could not fetch data needed for VM migrate
>>>>>>>>>>>>> operation"
>>>>>>>>>>>>>
>>>>>>>>>>>>> Sounds like
>>>>>>>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?format=multiple=1670701
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ron.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> GREG SHEREMETA
>>>>>>>>>>>>
>>>>>>>>>>>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>>>>>>>>>>> Red Hat NA
>>>>>>>>>>>>
>>>>>>>>>>>> <https://www.redhat.com/>
>>>>>>>>>>>>
>>>>>>>>>>>> gsher...@redhat.comIRC: gshereme
>>>>>>>>>>>> <https://red.ht/sig>
>>>>>>>>>>>>


-- 

ABHISHEK SAHNI
Mob : +91-990-701-5143


[ovirt-users] Re: upgrading failed from 4.2 to 4.3

2019-03-14 Thread Abhishek Sahni
Hey Simone,

Thanks much. :-)

Successfully upgraded.


On Fri, Mar 15, 2019 at 1:49 AM Simone Tiraboschi 
wrote:

>
>
> On Thu, Mar 14, 2019 at 1:18 PM Abhishek Sahni <
> abhishek.sahni1...@gmail.com> wrote:
>
>> Hello Users,
>>
>> I am trying to upgrade the ovirt-engine running on bare metal, following
>> the guide below,
>>
>> https://ovirt.org/release/4.3.0/
>>
>> and it fails every time with the errors below,
>>
>> engine-setup
>>
>
> If you are sure that you are not in global maintenance mode you can
> execute engine-setup with:
> engine-setup
> --otopi-environment=OVESETUP_CONFIG/continueSetupOnHEVM=bool:True
> to skip that check.
>
> At the end, I suggest making sure you have correctly disabled ovirt-ha-agent
> and ovirt-ha-broker on all the hosts.
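For reference, the corresponding commands look roughly like this (standard hosted-engine CLI and systemd; a sketch, not verified against this cluster):

```shell
# When the engine really is a hosted-engine VM: set global maintenance
# on any HA host before running engine-setup, and clear it afterwards.
hosted-engine --set-maintenance --mode=global
# ... run engine-setup on the engine VM ...
hosted-engine --set-maintenance --mode=none

# When the engine has genuinely moved to bare metal: make sure the old
# HA services can no longer kill it (run on each former HE host).
systemctl disable --now ovirt-ha-agent ovirt-ha-broker
```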
>
>
>>
>> 
>> [ INFO  ] Stage: Setup validation
>>   During execution engine service will be stopped (OK, Cancel)
>> [OK]:
>> [ ERROR ] It seems that you are running your engine inside of the
>> hosted-engine VM and are not in "Global Maintenance" mode.
>>  In that case you should put the system into the "Global
>> Maintenance" mode before running engine-setup, or the hosted-engine HA
>> agent might kill the machine, which might corrupt your data.
>>
>> [ ERROR ] Failed to execute stage 'Setup validation': Hosted Engine setup
>> detected, but Global Maintenance is not set.
>> [ INFO  ] Stage: Clean up
>>   Log file is located at
>> /var/log/ovirt-engine/setup/ovirt-engine-setup-20190314171026-wicgh1.log
>> [ INFO  ] Generating answer file
>> '/var/lib/ovirt-engine/setup/answers/20190314171042-setup.conf'
>> [ INFO  ] Stage: Pre-termination
>> [ INFO  ] Stage: Termination
>> [ ERROR ] Execution of setup failed
>> ===
>>
>> I was using a hosted engine on the nodes earlier, but later removed the
>> nodes and am now running ovirt-engine on a separate bare-metal machine.
>>
>> Any pointers?
>>
>>
>> --
>>
>> ABHISHEK SAHNI
>> Mob : +91-990-701-5143
>>
>>
>

-- 

ABHISHEK SAHNI
Mob : +91-990-701-5143


[ovirt-users] upgrading failed from 4.2 to 4.3

2019-03-14 Thread Abhishek Sahni
Hello Users,

I am trying to upgrade the ovirt-engine running on bare metal, following the
guide below,

https://ovirt.org/release/4.3.0/

and it fails every time with the errors below,

engine-setup


[ INFO  ] Stage: Setup validation
  During execution engine service will be stopped (OK, Cancel)
[OK]:
[ ERROR ] It seems that you are running your engine inside of the
hosted-engine VM and are not in "Global Maintenance" mode.
 In that case you should put the system into the "Global
Maintenance" mode before running engine-setup, or the hosted-engine HA
agent might kill the machine, which might corrupt your data.

[ ERROR ] Failed to execute stage 'Setup validation': Hosted Engine setup
detected, but Global Maintenance is not set.
[ INFO  ] Stage: Clean up
  Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20190314171026-wicgh1.log
[ INFO  ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20190314171042-setup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Execution of setup failed
===

I was using a hosted engine on the nodes earlier, but later removed the
nodes and am now running ovirt-engine on a separate bare-metal machine.

Any pointers?


-- 

ABHISHEK SAHNI
Mob : +91-990-701-5143


[ovirt-users] Is it possible to migrate self hosted engine to bare metal?

2018-12-19 Thread Abhishek Sahni
Hello Everyone,

Are there documented steps for migrating a self-hosted engine to a separate
bare-metal machine?

I do have a recent backup of the DB from the self-hosted engine.



-- 

ABHISHEK SAHNI
Mob : +91-990-701-5143


[ovirt-users] Re: (no subject)

2018-12-04 Thread Abhishek Sahni
Hello,

I can confirm the hosted engine itself is stopping the glusterd service on
node1.

- Is it possible to remove the pending task from the DB? I am able to keep
the engine VM from failing by stopping the ovirt-engine service via SSH.

- Is it possible to stop managing the Gluster service on the cluster (without
having GUI access), so that the engine will not manage the host's gluster
service?



On Tue, Dec 4, 2018 at 2:22 PM Abhishek Sahni 
wrote:

> Corrections,
>
> Hello Sahina,
>
> [root@node2 ~]# hosted-engine --get-shared-config mnt_options
> --type=he_shared
>
>
> mnt_options : , type : he_shared
>
>
> [root@node2 ~]#
>
> [root@node2 ~]# hosted-engine --set-shared-config mnt_options
> backup-volfile-servers=192.168.2.2:192.168.2.3  --type=he_shared
>
> [root@node2 ~]# hosted-engine --get-shared-config mnt_options
> --type=he_shared
>
> mnt_options : backup-volfile-servers=192.168.2.2:192.168.2.3, type :
> he_shared
>
> I have restarted the broker and agent on node2 and tried to start the HE VM,
> but it is still not starting; it seems the bricks are still not getting
> mounted from the backup-volfile-servers.
>
> Thanks in advance,
>
> On Tue, Dec 4, 2018 at 2:19 PM Abhishek Sahni <
> abhishek.sahni1...@gmail.com> wrote:
>
>> Hello Sahina,
>>
>> [root@node2 ~]# hosted-engine --get-shared-config mnt_options
>> --type=he_shared
>>
>>
>> mnt_options : , type : he_shared
>>
>>
>> [root@node2 ~]#
>>
>> [root@node2 ~]# hosted-engine --set-shared-config mnt_options
>> backup-volfile-servers=192.168.2.1:192.168.2.1  --type=he_shared
>>
>> [root@node2 ~]# hosted-engine --get-shared-config mnt_options
>> --type=he_shared
>>
>> mnt_options : backup-volfile-servers=192.168.2.1:192.168.2.1, type :
>> he_shared
>>
>> I have restarted the broker and agent on node2 and try to start HE vm but
>> still it is starting, seems like bricks are still not getting mounted from
>> backup-volfile-servers.
>>
>> Thanks in advance,
>>
>>
>>
>>
>>
>>
>>
>>
>
> --
>
> ABHISHEK SAHNI
> Mob : +91-990-701-5143
>
>
>

-- 

ABHISHEK SAHNI
Mob : +91-990-701-5143


[ovirt-users] Re: (no subject)

2018-12-04 Thread Abhishek Sahni
Corrections,

Hello Sahina,

[root@node2 ~]# hosted-engine --get-shared-config mnt_options
--type=he_shared


mnt_options : , type : he_shared


[root@node2 ~]#

[root@node2 ~]# hosted-engine --set-shared-config mnt_options
backup-volfile-servers=192.168.2.2:192.168.2.3  --type=he_shared

[root@node2 ~]# hosted-engine --get-shared-config mnt_options
--type=he_shared

mnt_options : backup-volfile-servers=192.168.2.2:192.168.2.3, type :
he_shared

I have restarted the broker and agent on node2 and tried to start the HE VM,
but it is still not starting; it seems the bricks are still not getting
mounted from the backup-volfile-servers.

Thanks in advance,
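One way to check whether the mount option itself works, independent of the HA broker, is to mount the engine volume by hand with the same option (server names match the thread; the volume name and mount point are assumptions):

```shell
mkdir -p /mnt/he-test
# With node1 down, the client should fail over to the backup servers.
mount -t glusterfs \
  -o backup-volfile-servers=192.168.2.2:192.168.2.3 \
  node1:/engine /mnt/he-test
ls /mnt/he-test && umount /mnt/he-test
```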

On Tue, Dec 4, 2018 at 2:19 PM Abhishek Sahni 
wrote:

> Hello Sahina,
>
> [root@node2 ~]# hosted-engine --get-shared-config mnt_options
> --type=he_shared
>
>
> mnt_options : , type : he_shared
>
>
> [root@node2 ~]#
>
> [root@node2 ~]# hosted-engine --set-shared-config mnt_options
> backup-volfile-servers=192.168.2.1:192.168.2.1  --type=he_shared
>
> [root@node2 ~]# hosted-engine --get-shared-config mnt_options
> --type=he_shared
>
> mnt_options : backup-volfile-servers=192.168.2.1:192.168.2.1, type :
> he_shared
>
> I have restarted the broker and agent on node2 and try to start HE vm but
> still it is starting, seems like bricks are still not getting mounted from
> backup-volfile-servers.
>
> Thanks in advance,
>
>
>
>
>
>
>
>

-- 

ABHISHEK SAHNI
Mob : +91-990-701-5143


[ovirt-users] Re: (no subject)

2018-12-04 Thread Abhishek Sahni
Hello Sahina,

[root@node2 ~]# hosted-engine --get-shared-config mnt_options
--type=he_shared


mnt_options : , type : he_shared


[root@node2 ~]#

[root@node2 ~]# hosted-engine --set-shared-config mnt_options
backup-volfile-servers=192.168.2.1:192.168.2.1  --type=he_shared

[root@node2 ~]# hosted-engine --get-shared-config mnt_options
--type=he_shared

mnt_options : backup-volfile-servers=192.168.2.1:192.168.2.1, type :
he_shared

I have restarted the broker and agent on node2 and try to start HE vm but
still it is starting, seems like bricks are still not getting mounted from
backup-volfile-servers.

Thanks in advance,


[ovirt-users] Hosted Engine goes down while putting gluster node into maintenance mode.

2018-12-03 Thread Abhishek Sahni
Hello Team,

We are running a 3-way replica HC Gluster setup configured during the
initial deployment from the cockpit console using Ansible.

NODE1
  - /dev/sda   (OS)
  - /dev/sdb   ( Gluster Bricks )
   * /gluster_bricks/engine/engine/
   * /gluster_bricks/data/data/
   * /gluster_bricks/vmstore/vmstore/

NODE2 and NODE3 with a similar setup.

Hosted engine was running on node2.

- While moving NODE1 into maintenance mode, along with stopping the
gluster service as prompted beforehand, the hosted engine instantly went down.

- I started the gluster service back up on node1 and started the hosted engine
again; it started properly but kept crashing again and again within
seconds of each successful start, apparently because HE itself is stopping
glusterd on node1 (not sure, but cross-verified by checking the glusterd
status).

*Is it possible to clear the pending tasks, or to keep the HE from stopping
glusterd on node1?*

*Or can we start the HE using another gluster node?*

https://paste.fedoraproject.org/paste/Qu2tSHuF-~G4GjGmstV6mg
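When the current host keeps killing the engine VM, it is sometimes possible to start it from one of the other HA hosts instead (standard hosted-engine CLI; a sketch, not verified against this cluster):

```shell
# On node2 or node3:
hosted-engine --vm-status                      # see host scores and VM state
hosted-engine --set-maintenance --mode=global  # stop the agents' automatic restarts
hosted-engine --vm-start                       # start the engine VM on this host
```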


-- 

ABHISHEK SAHNI


IISER Bhopal


[ovirt-users] (no subject)

2018-12-03 Thread Abhishek Sahni
Hello Team,


We are running a 3-way replica HC Gluster setup configured during the
initial deployment from the cockpit console using Ansible.

NODE1
  - /dev/sda   (OS)
  - /dev/sdb   ( Gluster Bricks )
   * /gluster_bricks/engine/engine/
   * /gluster_bricks/data/data/
   * /gluster_bricks/vmstore/vmstore/

NODE2 and NODE3 with a similar setup.

Hosted engine was running on node2.

- While moving NODE1 into maintenance mode, along with stopping the
gluster service as prompted beforehand, the hosted engine instantly went down.

- I started the gluster service back up on node1 and started the hosted engine
again; it started properly but kept crashing again and again within
seconds of each successful start, apparently because HE itself is stopping
glusterd on node1 (not sure, but cross-verified by checking the glusterd
status).

*Is it possible to clear the pending tasks, or to keep the HE from stopping
glusterd on node1?*

*Or can we start the HE using another gluster node?*

https://paste.fedoraproject.org/paste/Qu2tSHuF-~G4GjGmstV6mg





-- 

ABHISHEK SAHNI
IISER Bhopal


[ovirt-users] Re: Recover accidentally deleted gluster node having bricks ( Engine, Data and Vmstore)

2018-11-28 Thread Abhishek Sahni
I just enabled it on the default cluster and now the volumes are visible. It
seems the Gluster service was disabled by default on the cluster.

On Tue, Nov 27, 2018 at 3:51 PM Sahina Bose  wrote:

>
>
> On Tue, Nov 27, 2018 at 3:45 PM Kaustav Majumder 
> wrote:
>
>> I am not sure why ovirt is not showing any volume.
>> Sahina, is this a bug?
>>
>
> Check if gluster service is enabled on the cluster.
> The volumes are managed only if this is true
>
>
>> On Tue, Nov 27, 2018 at 3:10 PM Abhishek Sahni <
>> abhishek.sahni1...@gmail.com> wrote:
>>
>>> Hello Kaustav,
>>>
>>> That's weird, I never saw any volumes under the storage tab since
>>> installation. I am using HC setup deployed using cockpit console.
>>>
>>> https://imgur.com/a/nH9rzK8
>>>
>>> Did I miss something?
>>>
>>>
>>> On Tue, Nov 27, 2018 at 2:50 PM Kaustav Majumder 
>>> wrote:
>>>
>>>> Click on volume for which you want to reset the brick-> under bricks
>>>> tab select the brick you wan to reset -> once you do you will see the
>>>> 'Reset Brick' option is active.
>>>> Attached is a screenshot -> https://i.imgur.com/QUMSrzt.png
>>>>
>>>> On Tue, Nov 27, 2018 at 2:43 PM Abhishek Sahni <
>>>> abhishek.sahni1...@gmail.com> wrote:
>>>>
>>>>> Thanks Sahina for your response, I am not able to find it on UI,
>>>>> please help me to navigate? and yes I am using oVirt 4.2.6.4-1.
>>>>>
>>>>> On Tue, Nov 27, 2018 at 12:55 PM Sahina Bose 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Nov 20, 2018 at 5:56 PM Abhishek Sahni <
>>>>>> abhishek.sahni1...@gmail.com> wrote:
>>>>>>
>>>>>>> Hello Team,
>>>>>>>
>>>>>>> We are running a setup of 3-way replica HC gluster setup configured
>>>>>>> during the initial deployment from the cockpit console using ansible.
>>>>>>>
>>>>>>> NODE1
>>>>>>>   - /dev/sda   (OS)
>>>>>>>   - /dev/sdb   ( Gluster Bricks )
>>>>>>>* /gluster_bricks/engine/engine/
>>>>>>>* /gluster_bricks/data/data/
>>>>>>>* /gluster_bricks/vmstore/vmstore/
>>>>>>>
>>>>>>> NODE2 and NODE3 with a similar setup.
>>>>>>>
>>>>>>> There is a mishap that /dev/sdb on NODE2 totally got crashed and now
>>>>>>> there is nothing inside. However, I have created similar directories 
>>>>>>> after
>>>>>>> mounting it back i.e.,
>>>>>>>
>>>>>>>* /gluster_bricks/engine/engine/
>>>>>>>* /gluster_bricks/data/data/
>>>>>>>* /gluster_bricks/vmstore/vmstore/
>>>>>>> but it is not yet recovered.
>>>>>>>
>>>>>>> =
>>>>>>> [root@node2 ~]# gluster volume status
>>>>>>> Status of volume: data
>>>>>>> Gluster process TCP Port  RDMA Port
>>>>>>> Online  Pid
>>>>>>>
>>>>>>> --
>>>>>>> Brick *.*.*.1:/gluster_bricks/data/data  49152 0  Y
>>>>>>>  1
>>>>>>> Brick *.*.*.2:/gluster_bricks/data/data  N/A   N/AN
>>>>>>>  N/A
>>>>>>> Brick *.*.*.3:/gluster_bricks/data/data  49152 0  Y
>>>>>>>  4303
>>>>>>> Self-heal Daemon on localhost   N/A   N/AY
>>>>>>>  23976
>>>>>>> Self-heal Daemon on *.*.*.1  N/A   N/AY
>>>>>>>  27838
>>>>>>> Self-heal Daemon on *.*.*.3  N/A   N/AY
>>>>>>>  27424
>>>>>>>
>>>>>>> Task Status of Volume data
>>>>>>>
>>>>>>> --
>>>>>>> There are no active volume tasks
>>>>>>>

[ovirt-users] Re: Recover accidentally deleted gluster node having bricks ( Engine, Data and Vmstore)

2018-11-27 Thread Abhishek Sahni
Thanks a lot. :-)

On Tue, Nov 27, 2018 at 4:22 PM Kaustav Majumder 
wrote:

>
>
> On Tue, Nov 27, 2018 at 4:05 PM Abhishek Sahni 
> wrote:
>
>> That is amazing, resetting bricks resolved the issue.
>>
>> Thanks much Sahina and Kaustav.
>>
>> However, do we have manual steps to recover those bricks?
>>
> https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/
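The manual equivalent of the UI's "Reset Brick" is Gluster's `reset-brick` command (introduced in the 3.9 release linked above); with this thread's volume and brick paths it would look roughly like:

```shell
# Take the dead brick offline, then re-commit the re-created, empty brick
gluster volume reset-brick data node2:/gluster_bricks/data/data start
gluster volume reset-brick data \
  node2:/gluster_bricks/data/data node2:/gluster_bricks/data/data \
  commit force

# Watch self-heal repopulate the brick
gluster volume heal data info
```

Repeat per affected volume (engine, data, vmstore).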
>
>>
>> On Tue, Nov 27, 2018 at 3:57 PM Abhishek Sahni 
>> wrote:
>>
>>> I just enabled it on default cluster and now the volumes are visible. It
>>> seems like gluster service was disabled by default on cluster.
>>>
>>> On Tue, Nov 27, 2018 at 3:51 PM Sahina Bose  wrote:
>>>
>>>>
>>>>
>>>> On Tue, Nov 27, 2018 at 3:45 PM Kaustav Majumder 
>>>> wrote:
>>>>
>>>>> I am not sure why ovirt is not showing any volume.
>>>>> Sahina, is this a bug?
>>>>>
>>>>
>>>> Check if gluster service is enabled on the cluster.
>>>> The volumes are managed only if this is true
>>>>
>>>>
>>>>> On Tue, Nov 27, 2018 at 3:10 PM Abhishek Sahni <
>>>>> abhishek.sahni1...@gmail.com> wrote:
>>>>>
>>>>>> Hello Kaustav,
>>>>>>
>>>>>> That's weird, I never saw any volumes under the storage tab since
>>>>>> installation. I am using HC setup deployed using cockpit console.
>>>>>>
>>>>>> https://imgur.com/a/nH9rzK8
>>>>>>
>>>>>> Did I miss something?
>>>>>>
>>>>>>
>>>>>> On Tue, Nov 27, 2018 at 2:50 PM Kaustav Majumder 
>>>>>> wrote:
>>>>>>
>>>>>>> Click on volume for which you want to reset the brick-> under bricks
>>>>>>> tab select the brick you wan to reset -> once you do you will see the
>>>>>>> 'Reset Brick' option is active.
>>>>>>> Attached is a screenshot -> https://i.imgur.com/QUMSrzt.png
>>>>>>>
>>>>>>> On Tue, Nov 27, 2018 at 2:43 PM Abhishek Sahni <
>>>>>>> abhishek.sahni1...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Thanks Sahina for your response, I am not able to find it on UI,
>>>>>>>> please help me to navigate? and yes I am using oVirt 4.2.6.4-1.
>>>>>>>>
>>>>>>>> On Tue, Nov 27, 2018 at 12:55 PM Sahina Bose 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Nov 20, 2018 at 5:56 PM Abhishek Sahni <
>>>>>>>>> abhishek.sahni1...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hello Team,
>>>>>>>>>>
>>>>>>>>>> We are running a setup of 3-way replica HC gluster setup
>>>>>>>>>> configured during the initial deployment from the cockpit console 
>>>>>>>>>> using
>>>>>>>>>> ansible.
>>>>>>>>>>
>>>>>>>>>> NODE1
>>>>>>>>>>   - /dev/sda   (OS)
>>>>>>>>>>   - /dev/sdb   ( Gluster Bricks )
>>>>>>>>>>* /gluster_bricks/engine/engine/
>>>>>>>>>>* /gluster_bricks/data/data/
>>>>>>>>>>* /gluster_bricks/vmstore/vmstore/
>>>>>>>>>>
>>>>>>>>>> NODE2 and NODE3 with a similar setup.
>>>>>>>>>>
>>>>>>>>>> Due to a mishap, /dev/sdb on NODE2 crashed completely and all data
>>>>>>>>>> on it was lost. After remounting it, I recreated the same directory
>>>>>>>>>> layout, i.e.,
>>>>>>>>>>
>>>>>>>>>>* /gluster_bricks/engine/engine/
>>>>>>>>>>* /gluster_bricks/data/data/
>>>>>>>>>>* /gluster_bricks/vmstore/vmstore/
>>>>>>>>>> but it is not yet recovered.
>>>>>>>>>>
>>>>>>>>>> ==

[ovirt-users] Re: Recover accidentally deleted gluster node having bricks ( Engine, Data and Vmstore)

2018-11-27 Thread Abhishek Sahni
Hello Kaustav,

That's weird; I have never seen any volumes under the storage tab since
installation. I am using an HC setup deployed via the cockpit console.

https://imgur.com/a/nH9rzK8

Did I miss something?


On Tue, Nov 27, 2018 at 2:50 PM Kaustav Majumder 
wrote:

> Click on the volume whose brick you want to reset -> under the Bricks tab,
> select the brick you want to reset -> the 'Reset Brick' option will then
> become active.
> Attached is a screenshot -> https://i.imgur.com/QUMSrzt.png
>
> On Tue, Nov 27, 2018 at 2:43 PM Abhishek Sahni <
> abhishek.sahni1...@gmail.com> wrote:
>
>> Thanks for your response, Sahina. I am not able to find that option in the
>> UI; could you help me navigate to it? And yes, I am using oVirt 4.2.6.4-1.
>>
>> On Tue, Nov 27, 2018 at 12:55 PM Sahina Bose  wrote:
>>
>>>
>>>
>>> On Tue, Nov 20, 2018 at 5:56 PM Abhishek Sahni <
>>> abhishek.sahni1...@gmail.com> wrote:
>>>
>>>> Hello Team,
>>>>
>>>> We are running a 3-way replica HC Gluster setup, configured during the
>>>> initial deployment from the cockpit console using Ansible.
>>>>
>>>> NODE1
>>>>   - /dev/sda   (OS)
>>>>   - /dev/sdb   ( Gluster Bricks )
>>>>* /gluster_bricks/engine/engine/
>>>>* /gluster_bricks/data/data/
>>>>* /gluster_bricks/vmstore/vmstore/
>>>>
>>>> NODE2 and NODE3 with a similar setup.
>>>>
>>>> Due to a mishap, /dev/sdb on NODE2 crashed completely and all data on it
>>>> was lost. After remounting it, I recreated the same directory layout,
>>>> i.e.,
>>>>
>>>>* /gluster_bricks/engine/engine/
>>>>* /gluster_bricks/data/data/
>>>>* /gluster_bricks/vmstore/vmstore/
>>>> but it is not yet recovered.
>>>>
>>>> =
>>>> [root@node2 ~]# gluster volume status
>>>> Status of volume: data
>>>> Gluster process TCP Port  RDMA Port
>>>> Online  Pid
>>>>
>>>> --
>>>> Brick *.*.*.1:/gluster_bricks/data/data  49152 0  Y
>>>>  1
>>>> Brick *.*.*.2:/gluster_bricks/data/data  N/A   N/AN
>>>>  N/A
>>>> Brick *.*.*.3:/gluster_bricks/data/data  49152 0  Y
>>>>  4303
>>>> Self-heal Daemon on localhost   N/A   N/AY
>>>>  23976
>>>> Self-heal Daemon on *.*.*.1  N/A   N/AY
>>>>  27838
>>>> Self-heal Daemon on *.*.*.3  N/A   N/AY
>>>>  27424
>>>>
>>>> Task Status of Volume data
>>>>
>>>> --
>>>> There are no active volume tasks
>>>>
>>>> Status of volume: engine
>>>> Gluster process TCP Port  RDMA Port
>>>> Online  Pid
>>>>
>>>> --
>>>> Brick *.*.*.1:/gluster_bricks/engine/eng
>>>> ine 49153 0  Y
>>>>  7
>>>> Brick *.*.*.2:/gluster_bricks/engine/eng
>>>> ine N/A   N/AN
>>>>  N/A
>>>> Brick *.*.*.3:/gluster_bricks/engine/eng
>>>> ine 49153 0  Y
>>>>  4314
>>>> Self-heal Daemon on localhost   N/A   N/AY
>>>>  23976
>>>> Self-heal Daemon on *.*.*.3  N/A   N/AY
>>>>  27424
>>>> Self-heal Daemon on *.*.*.1  N/A   N/AY
>>>>  27838
>>>>
>>>> Task Status of Volume engine
>>>>
>>>> --
>>>> There are no active volume tasks
>>>>
>>>> Status of volume: vmstore
>>>> Gluster process TCP Port  RDMA Port
>>>> Online  Pid
>>>>
>>>> --
>>>> Brick *.*.*.1:/gluster_bricks/vmstore/vm

[ovirt-users] Re: Recover accidentally deleted gluster node having bricks ( Engine, Data and Vmstore)

2018-11-27 Thread Abhishek Sahni
Thanks for your response, Sahina. I am not able to find that option in the UI;
could you help me navigate to it? And yes, I am using oVirt 4.2.6.4-1.

On Tue, Nov 27, 2018 at 12:55 PM Sahina Bose  wrote:

>
>
> On Tue, Nov 20, 2018 at 5:56 PM Abhishek Sahni <
> abhishek.sahni1...@gmail.com> wrote:
>
>> Hello Team,
>>
>> We are running a 3-way replica HC Gluster setup, configured during the
>> initial deployment from the cockpit console using Ansible.
>>
>> NODE1
>>   - /dev/sda   (OS)
>>   - /dev/sdb   ( Gluster Bricks )
>>* /gluster_bricks/engine/engine/
>>* /gluster_bricks/data/data/
>>* /gluster_bricks/vmstore/vmstore/
>>
>> NODE2 and NODE3 with a similar setup.
>>
>> Due to a mishap, /dev/sdb on NODE2 crashed completely and all data on it
>> was lost. After remounting it, I recreated the same directory layout, i.e.,
>>
>>* /gluster_bricks/engine/engine/
>>* /gluster_bricks/data/data/
>>* /gluster_bricks/vmstore/vmstore/
>> but it is not yet recovered.
>>
>> =
>> [root@node2 ~]# gluster volume status
>> Status of volume: data
>> Gluster process TCP Port  RDMA Port  Online
>> Pid
>>
>> --
>> Brick *.*.*.1:/gluster_bricks/data/data  49152 0  Y
>>  1
>> Brick *.*.*.2:/gluster_bricks/data/data  N/A   N/AN
>>  N/A
>> Brick *.*.*.3:/gluster_bricks/data/data  49152 0  Y
>>  4303
>> Self-heal Daemon on localhost   N/A   N/AY
>>  23976
>> Self-heal Daemon on *.*.*.1  N/A   N/AY
>>  27838
>> Self-heal Daemon on *.*.*.3  N/A   N/AY
>>  27424
>>
>> Task Status of Volume data
>>
>> --
>> There are no active volume tasks
>>
>> Status of volume: engine
>> Gluster process TCP Port  RDMA Port  Online
>> Pid
>>
>> --
>> Brick *.*.*.1:/gluster_bricks/engine/eng
>> ine 49153 0  Y
>>  7
>> Brick *.*.*.2:/gluster_bricks/engine/eng
>> ine N/A   N/AN
>>  N/A
>> Brick *.*.*.3:/gluster_bricks/engine/eng
>> ine 49153 0  Y
>>  4314
>> Self-heal Daemon on localhost   N/A   N/AY
>>  23976
>> Self-heal Daemon on *.*.*.3  N/A   N/AY
>>  27424
>> Self-heal Daemon on *.*.*.1  N/A   N/AY
>>  27838
>>
>> Task Status of Volume engine
>>
>> --
>> There are no active volume tasks
>>
>> Status of volume: vmstore
>> Gluster process TCP Port  RDMA Port  Online
>> Pid
>>
>> --
>> Brick *.*.*.1:/gluster_bricks/vmstore/vm
>> store   49154 0  Y
>>  21603
>> Brick *.*.*.2:/gluster_bricks/vmstore/vm
>> store   N/A   N/AN
>>  N/A
>> Brick *.*.*.3:/gluster_bricks/vmstore/vm
>> store   49154 0  Y
>>  26845
>> Self-heal Daemon on localhost   N/A   N/AY
>>  23976
>> Self-heal Daemon on *.*.*.3  N/A   N/AY
>>  27424
>> Self-heal Daemon on *.*.*.1  N/A   N/AY
>>  27838
>>
>> Task Status of Volume vmstore
>>
>> --
>> There are no active volume tasks
>> =
>>
>>
>> Can someone please suggest the steps to recover the setup?
>>
>> I have tried the workaround below, but it did not help.
>>
>>
>> https://lists.gluster.org/pipermail/gluster-users/2013-November/015079.html
>>
>
>  You can reset the brick - if you're on oVirt 4.2.x, there's a UI option
> in the bricks subtab to do this.
>
>
>>
>> --
>>
>> ABHISHEK SAHNI

[ovirt-users] Recover accidentally deleted gluster node having bricks ( Engine, Data and Vmstore)

2018-11-20 Thread Abhishek Sahni
Hello Team,

We are running a 3-way replica HC Gluster setup, configured during the initial
deployment from the cockpit console using Ansible.

NODE1
  - /dev/sda   (OS)
  - /dev/sdb   ( Gluster Bricks )
   * /gluster_bricks/engine/engine/
   * /gluster_bricks/data/data/
   * /gluster_bricks/vmstore/vmstore/

NODE2 and NODE3 with a similar setup.

Due to a mishap, /dev/sdb on NODE2 crashed completely and all data on it was
lost. After remounting it, I recreated the same directory layout, i.e.,

   * /gluster_bricks/engine/engine/
   * /gluster_bricks/data/data/
   * /gluster_bricks/vmstore/vmstore/
but it is not yet recovered.

=
[root@node2 ~]# gluster volume status
Status of volume: data
Gluster process                               TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick *.*.*.1:/gluster_bricks/data/data       49152     0          Y       1
Brick *.*.*.2:/gluster_bricks/data/data       N/A       N/A        N       N/A
Brick *.*.*.3:/gluster_bricks/data/data       49152     0          Y       4303
Self-heal Daemon on localhost                 N/A       N/A        Y       23976
Self-heal Daemon on *.*.*.1                   N/A       N/A        Y       27838
Self-heal Daemon on *.*.*.3                   N/A       N/A        Y       27424

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: engine
Gluster process                               TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick *.*.*.1:/gluster_bricks/engine/engine   49153     0          Y       7
Brick *.*.*.2:/gluster_bricks/engine/engine   N/A       N/A        N       N/A
Brick *.*.*.3:/gluster_bricks/engine/engine   49153     0          Y       4314
Self-heal Daemon on localhost                 N/A       N/A        Y       23976
Self-heal Daemon on *.*.*.3                   N/A       N/A        Y       27424
Self-heal Daemon on *.*.*.1                   N/A       N/A        Y       27838

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: vmstore
Gluster process                               TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick *.*.*.1:/gluster_bricks/vmstore/vmstore 49154     0          Y       21603
Brick *.*.*.2:/gluster_bricks/vmstore/vmstore N/A       N/A        N       N/A
Brick *.*.*.3:/gluster_bricks/vmstore/vmstore 49154     0          Y       26845
Self-heal Daemon on localhost                 N/A       N/A        Y       23976
Self-heal Daemon on *.*.*.3                   N/A       N/A        Y       27424
Self-heal Daemon on *.*.*.1                   N/A       N/A        Y       27838

Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks
=


Can someone please suggest the steps to recover the setup?

I have tried the workaround below, but it did not help.

https://lists.gluster.org/pipermail/gluster-users/2013-November/015079.html
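
[Editor's note] The "Reset Brick" UI action suggested in the replies above maps
roughly to Gluster's reset-brick CLI. The sketch below is an outline only, not
a verified procedure for this cluster: the volume name and brick path are taken
from this thread, and the exact syntax should be checked against your Gluster
version before running. Repeat per affected volume (data, engine, vmstore).

```shell
# Sketch: replace a dead brick in place with reset-brick (Gluster >= 3.9).
VOL=data
BRICK=node2:/gluster_bricks/data/data

# 1. Take the dead brick out of service.
gluster volume reset-brick "$VOL" "$BRICK" start

# 2. Ensure the empty brick directory exists on the replacement filesystem
#    (already recreated in this thread), then re-add the same path.
#    'commit force' accepts the empty brick.
gluster volume reset-brick "$VOL" "$BRICK" "$BRICK" commit force

# 3. Let self-heal repopulate the brick from the healthy replicas.
gluster volume heal "$VOL" full
gluster volume heal "$VOL" info
```

Alternatively, the same sequence is what the oVirt 4.2 "Reset Brick" button in
the Bricks subtab performs through the engine.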

-- 

ABHISHEK SAHNI
Mob : +91-990-701-5143
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WFYUBA4DPHOSAZHGCRV2AAT4JWL4LWWV/


[ovirt-users] Significant increase in memory consumption after upgrade.

2018-11-01 Thread Abhishek Sahni
Hello Team,

I have noticed that the HA agent and broker are continuously failing after the
upgrade, and I found an existing bug report describing the same errors.

I have attached the logs and a screenshot to that bug:

- https://bugzilla.redhat.com/show_bug.cgi?id=1639997

Can someone please help me with this? TIA
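
[Editor's note] Not part of the original message, but a plausible first
diagnostic step for looping ovirt-ha-agent/ovirt-ha-broker failures. The
service names and log paths below are the standard ones for hosted-engine
hosts; adapt as needed.

```shell
# Restart the hosted-engine HA daemons and check whether they crash-loop.
systemctl restart ovirt-ha-broker ovirt-ha-agent
systemctl status ovirt-ha-broker ovirt-ha-agent

# The daemons log here on hosted-engine hosts; look for the traceback
# matching the attached bug report.
tail -n 100 /var/log/ovirt-hosted-engine-ha/broker.log
tail -n 100 /var/log/ovirt-hosted-engine-ha/agent.log
```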

-- 
Thanks,

Abhishek Sahni
Computer Centre
IISER Bhopal
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/O2O6D2WM55ADIH753P6JKNKQLMQB6BTV/


[ovirt-users] Getting Errors while installing self host ovirt engine

2018-04-16 Thread Abhishek Sahni
Hello Team,

Given below are the timeout errors encountered while installing the
self-hosted oVirt engine. Can someone please confirm whether the two packages
named below are properly populated in the available repositories?

===
(475/483): xmlrpc-c-client-1.32.5-1905.svn2451.el7.x86_6 |  32 kB   00:00

(476/483): yajl-2.0.4-4.el7.x86_64.rpm   |  39 kB   00:00

(477/483): vhostmd-0.5-12.el7.x86_64.rpm |  44 kB   00:01

(478/483): yum-utils-1.1.31-42.el7.noarch.rpm| 117 kB   00:00

(479/483): vdsm-python-4.20.23-1.el7.centos.noarch.rpm   | 1.2 MB   00:02

(480/483): qemu-kvm-ev-2.9.0-16.el7_4.14.1.x86_64.rpm| 2.9 MB   00:18

(481/483): virt-v2v-1.36.3-6.el7_4.3.x86_64.rpm  |  12 MB   00:03

ansible-2.4.3.0-1.el7.noarch.r FAILED

http://mirror.centos.org/centos/7/virt/x86_64/ovirt-4.2/common/
*ansible-2.4.3.0-1.el7.noarch.rpm*: [Errno 12] Timeout on
http://mirror.centos.org/centos/7/virt/x86_64/ovirt-4.2/common/ansible-2.4.3.0-1.el7.noarch.rpm:
(28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30
seconds')
Trying other mirror.
openvswitch-2.9.0-3.el7.x86_64 FAILED

http://mirror.centos.org/centos/7/virt/x86_64/ovirt-4.2/
*openvswitch-2.9.0-3.el7.x86_64.rpm*: [Errno 12] Timeout on
http://mirror.centos.org/centos/7/virt/x86_64/ovirt-4.2/openvswitch-2.9.0-3.el7.x86_64.rpm:
(28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30
seconds')
Trying other mirror.
ansible-2.4.3.0-1.el7.noarch.r FAILED

http://mirror.centos.org/centos/7/virt/x86_64/ovirt-4.2/common/ansible-2.4.3.0-1.el7.noarch.rpm:
[Errno 12] Timeout on
http://mirror.centos.org/centos/7/virt/x86_64/ovirt-4.2/common/ansible-2.4.3.0-1.el7.noarch.rpm:
(28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30
seconds')
Trying other mirror.
==
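
[Editor's note] The failures above are yum aborting transfers slower than its
default minimum rate (1000 bytes/sec, the threshold named in the error). One
possible workaround, an assumption rather than a verified fix for this mirror,
is to relax that threshold in /etc/yum.conf and retry:

```shell
# Allow slow mirrors: drop the minimum transfer rate and extend the
# per-connection timeout, then clear cached metadata and retry.
grep -q '^minrate=' /etc/yum.conf || echo 'minrate=1'   >> /etc/yum.conf
grep -q '^timeout=' /etc/yum.conf || echo 'timeout=300' >> /etc/yum.conf
yum clean metadata
yum install -y ansible openvswitch   # the two packages that timed out
```

Pointing the .repo file at a faster mirror (explicit baseurl) avoids the slow
mirror entirely.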

If anything is required from my end, please let me know.

Thanks in advance.

-- 

Abhishek Sahni
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users