Re: [ovirt-users] gdeploy error

2017-02-15 Thread Sachidananda URS
Hi, On Wed, Feb 15, 2017 at 2:30 PM, Ramesh Nachimuthu wrote: > > + Sac, > > > - Original Message - > > From: "Sandro Bonazzola" > > To: "Ishmael Tsoaela" , "Ramesh Nachimuthu" < > rnach...@redhat.com> > > Cc: "users"

Re: [ovirt-users] Hosted Engine Setup with the gluster bricks on the same disk as the OS

2017-05-18 Thread Sachidananda URS
Hi, On Thu, May 18, 2017 at 7:08 PM, Sahina Bose wrote: > > > On Thu, May 18, 2017 at 3:20 PM, Mike DePaulo > wrote: > >> Well, I tried both of the following: >> 1. Having only a boot partition and a PV for the OS that does not take >> up the entire

[ovirt-users] Re: ovirt-node 4.2 iso - hyperconverged wizard doesn't write gdeployConfig settings

2019-02-06 Thread Sachidananda URS
Hi, On Thu, Feb 7, 2019 at 9:27 AM Sahina Bose wrote: > +Sachidananda URS to review user request about systemd mount files > > On Tue, Feb 5, 2019 at 10:22 PM feral wrote: > > > > Using SystemD makes way more sense to me. I was just trying to use > ovirt-node as it
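For reference, a minimal sketch of a systemd mount unit for a gluster brick, assuming a hypothetical LV /dev/mapper/gluster_vg_sdb-gluster_lv_engine mounted at /gluster_bricks/engine (systemd requires the unit file name to encode the mount point):

    # /etc/systemd/system/gluster_bricks-engine.mount  (name must match Where=)
    [Unit]
    Description=Gluster brick mount (engine)
    Before=glusterd.service

    [Mount]
    # Device and mount point below are illustrative placeholders.
    What=/dev/mapper/gluster_vg_sdb-gluster_lv_engine
    Where=/gluster_bricks/engine
    Type=xfs
    Options=inode64,noatime,nodiratime

    [Install]
    WantedBy=multi-user.target

Enable it with: systemctl daemon-reload && systemctl enable --now gluster_bricks-engine.mount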

[ovirt-users] Re: Deploying single instance - error

2019-01-30 Thread Sachidananda URS
On Thu, Jan 31, 2019 at 12:48 PM Strahil Nikolov wrote: > Hi All, > > I have managed to fix this by reinstalling gdeploy package. Yet, it still > asks for "Disckount" section - but as the fix was not rolled for CentOS yet > - this is expected. > Till the CentOS team includes the package, you
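The fix mentioned above boils down to reinstalling the package once it is available in the distribution's repositories; a sketch:

    # Reinstall gdeploy from the configured repos (availability on CentOS
    # lagged behind Fedora at the time of this thread).
    yum reinstall gdeploy
    gdeploy --version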

[ovirt-users] Re: Deploying single instance - error

2019-01-30 Thread Sachidananda URS
On Thu, Jan 31, 2019 at 8:01 AM Strahil Nikolov wrote: > Hey Guys/Gals, > > did you update the gdeploy for CentOS ? > gdeploy is updated for Fedora, for CentOS the packages will be updated shortly, we are testing the packages. However, this issue you are facing where RAID is selected over JBOD

[ovirt-users] Re: Deploying single instance - error

2019-01-28 Thread Sachidananda URS
Hi David, On Mon, Jan 28, 2019 at 5:01 PM Gobinda Das wrote: > Hi David, > Thanks! > Adding sac to check if we are missing anything for gdeploy. > > On Mon, Jan 28, 2019 at 4:33 PM Leo David wrote: > >> Hi Gobinda, >> gdeploy --version >> gdeploy 2.0.2 >> >> yum list installed | grep gdeploy

[ovirt-users] Re: Gluster Deployment Failed - No Medium Found

2019-06-03 Thread Sachidananda URS
Hi Stephen, On Mon, Jun 3, 2019 at 3:57 PM wrote: > Good Morning, > > I'm completely new to this and I'm testing setting up a Gluster > environment with oVirt. However, my deployment keeps failing and I don't > understand what it means. Any assistance would be much appreciated. Please > see

[ovirt-users] Re: oVirt node loses gluster volume UUID after reboot, goes to emergency mode every time I reboot.

2019-05-27 Thread Sachidananda URS
On Mon, May 27, 2019 at 9:41 AM wrote: > I made them manually. First created the LVM drives, then the VDO devices, > then gluster volumes > In that case you must add these mount options ( inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service) manually
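A sketch of how those options might look as an /etc/fstab entry, assuming a hypothetical VDO device and brick path:

    # Illustrative fstab line for a gluster brick on a VDO volume.
    # _netdev plus the x-systemd options prevent boot from dropping to
    # emergency mode if the device is slow, and order the mount after vdo.service.
    /dev/mapper/vdo_sdb  /gluster_bricks/data  xfs  inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service  0 0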

[ovirt-users] Re: 4.3.4 caching disk error during hyperconverged deployment

2019-06-17 Thread Sachidananda URS
On Thu, Jun 13, 2019 at 7:11 AM wrote: > While trying to do a hyperconverged setup and trying to use "configure LV > Cache" /dev/sdf, the deployment fails. If I don't use the LV cache SSD disk > the setup succeeds. Thought you might want to know; for now I retested with > 4.3.3 and all worked fine,
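For context, attaching an SSD as an LVM cache by hand looks roughly like the following (VG/LV names and sizes are hypothetical; the wizard's exact steps may differ):

    # Add the SSD to the brick's volume group (illustrative names).
    pvcreate /dev/sdf
    vgextend gluster_vg_sdb /dev/sdf
    # Carve a cache pool out of the SSD and attach it to the thin pool.
    lvcreate --type cache-pool -L 100G -n lv_cache gluster_vg_sdb /dev/sdf
    lvconvert --type cache --cachepool gluster_vg_sdb/lv_cache gluster_vg_sdb/gluster_thinpool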

[ovirt-users] Re: oVirt node loses gluster volume UUID after reboot, goes to emergency mode every time I reboot.

2019-05-22 Thread Sachidananda URS
On Wed, May 22, 2019 at 11:26 AM Sahina Bose wrote: > +Sachidananda URS > > On Wed, May 22, 2019 at 1:14 AM wrote: > >> I'm sorry, i'm still working on my linux knowledge, here is the output of >> my blkid on one of the servers: >> >> /dev/nvme0n1: PTTYP

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-21 Thread Sachidananda URS
On Tue, May 21, 2019 at 9:00 PM Adrian Quintero wrote: > Sac, > > 6. started the hyperconverged setup wizard and added > "gluster_features_force_varlogsizecheck: false" to the "vars:" section > of the generated Ansible inventory: > /etc/ansible/hc_wizard_inventory.yml file as it was
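The variable sits under vars: in the generated inventory; a sketch of the relevant excerpt, with placeholder host names:

    # /etc/ansible/hc_wizard_inventory.yml (excerpt; hosts are placeholders)
    hc_nodes:
      hosts:
        host1.example.com:
        host2.example.com:
        host3.example.com:
      vars:
        gluster_features_force_varlogsizecheck: false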

[ovirt-users] Re: oVirt node loses gluster volume UUID after reboot, goes to emergency mode every time I reboot.

2019-05-20 Thread Sachidananda URS
On Mon, May 20, 2019 at 11:58 AM Sahina Bose wrote: > Adding Sachi > > On Thu, May 9, 2019 at 2:01 AM wrote: > >> This only started to happen with oVirt node 4.3, 4.2 didn't have issue. >> Since I updated to 4.3, every reboot the host goes into emergency mode. >> First few times this happened I

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-21 Thread Sachidananda URS
ume '/dev/sdc' failed", >> "rc": 5} >> failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', >> u'pvname': u'/dev/sdd'}) => {"changed": false, "err": " Device /dev/sdd >> excluded by a filter.\n", "item":
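"Device ... excluded by a filter" typically means LVM found a stale partition table or filesystem signature on the disk. A sketch of one common way to inspect and clear it, assuming the disk really is free to reuse:

    # List existing signatures on the disk (read-only).
    wipefs /dev/sdd
    # Erase them so LVM will accept the disk.
    # WARNING: destroys any data/partition table on /dev/sdd.
    wipefs -a /dev/sdd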

[ovirt-users] Re: Disk latency is very high, time taken to copy 1M file is > 10s

2019-07-03 Thread Sachidananda URS
Hi, On Wed, Jul 3, 2019 at 3:59 PM PS Kazi wrote: > hi, > I am using HDD : Toshiba 7200 RPM, Data transfer Rate 150MB/s, Interface > 6Gb/s. > But Hyper-converged configuration stopped with error msg: Disk latency is > very high, time taken to copy 1M file is > 10s > Please help me to stop
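The failing check can be reproduced by hand with something like the following (brick path is a placeholder; the wizard's exact test may differ):

    # Time a synchronous 1 MiB write on the brick filesystem.
    time dd if=/dev/zero of=/gluster_bricks/engine/test.bin bs=1M count=1 oflag=dsync
    rm -f /gluster_bricks/engine/test.bin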

[ovirt-users] Re: Disk latency is very high, time taken to copy 1M file is > 10s

2019-07-03 Thread Sachidananda URS
Hi, On Wed, Jul 3, 2019 at 6:22 PM Sachidananda URS wrote: > Hi, > > > On Wed, Jul 3, 2019 at 3:59 PM PS Kazi wrote: > >> hi, >> I am using HDD : Toshiba 7200 RPM, Data transfer Rate 150MB/s, Interface >> 6Gb/s. >> But Hyper-converged configuratio

[ovirt-users] Re: hyperconverged single node with SSD cache fails gluster creation

2019-09-04 Thread Sachidananda URS
On Wed, Sep 4, 2019 at 9:27 PM wrote: > I am seeing more success than failures at creating single and triple node > hyperconverged setups after some weeks of experimentation so I am branching > out to additional features: In this case the ability to use SSDs as cache > media for hard disks. > >