Hi,
On Wed, Feb 15, 2017 at 2:30 PM, Ramesh Nachimuthu
wrote:
>
> + Sac,
>
>
> - Original Message -
> > From: "Sandro Bonazzola"
> > To: "Ishmael Tsoaela" , "Ramesh Nachimuthu" <
> rnach...@redhat.com>
> > Cc: "users"
Hi,
On Thu, May 18, 2017 at 7:08 PM, Sahina Bose wrote:
>
>
> On Thu, May 18, 2017 at 3:20 PM, Mike DePaulo
> wrote:
>
>> Well, I tried both of the following:
>> 1. Having only a boot partition and a PV for the OS that does not take
>> up the entire
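
(As an illustration of that first layout, with a hypothetical /dev/sda: a
small /boot plus an OS PV that leaves the rest of the disk unallocated.
Device name and sizes are illustrative, not taken from the thread:)

    parted -s /dev/sda mklabel gpt
    parted -s /dev/sda mkpart boot xfs 1MiB 1GiB
    parted -s /dev/sda mkpart os xfs 1GiB 101GiB
    parted -s /dev/sda set 2 lvm on
    pvcreate /dev/sda2    # OS PV; remaining space stays free for bricks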
Hi,
On Thu, Feb 7, 2019 at 9:27 AM Sahina Bose wrote:
> +Sachidananda URS to review the user request about systemd mount files
>
> On Tue, Feb 5, 2019 at 10:22 PM feral wrote:
> >
> > Using SystemD makes way more sense to me. I was just trying to use
> ovirt-node as it
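
(For reference, a brick mount expressed as a systemd unit might look like
the sketch below. Device, path, and options are illustrative; note the unit
file name must match the mount point, so /gluster_bricks/engine becomes
gluster_bricks-engine.mount:)

    # /etc/systemd/system/gluster_bricks-engine.mount
    [Unit]
    Description=Gluster brick mount (illustrative)
    Before=glusterd.service

    [Mount]
    What=/dev/mapper/gluster_vg_sdb-gluster_lv_engine
    Where=/gluster_bricks/engine
    Type=xfs
    Options=inode64,noatime,nodiratime

    [Install]
    WantedBy=multi-user.target

Enable it with: systemctl enable gluster_bricks-engine.mount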
On Thu, Jan 31, 2019 at 12:48 PM Strahil Nikolov
wrote:
> Hi All,
>
> I have managed to fix this by reinstalling the gdeploy package. Yet it
> still asks for the "diskcount" section - but as the fix has not been
> rolled out for CentOS yet, this is expected.
>
Until the CentOS team includes the package, you
On Thu, Jan 31, 2019 at 8:01 AM Strahil Nikolov
wrote:
> Hey Guys/Gals,
>
> did you update gdeploy for CentOS?
>
gdeploy is updated for Fedora; for CentOS the packages will be updated
shortly - we are testing them. However, the issue you are facing, where
RAID is selected over JBOD
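
For context, disktype, diskcount, and stripesize are standalone sections in
a gdeploy configuration file. A minimal sketch of a RAID layout (values are
illustrative; with jbod the diskcount/stripesize sections are normally not
needed):

    [disktype]
    raid6

    [diskcount]
    10

    [stripesize]
    256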
Hi David,
On Mon, Jan 28, 2019 at 5:01 PM Gobinda Das wrote:
> Hi David,
> Thanks!
> Adding sac to check if we are missing anything for gdeploy.
>
> On Mon, Jan 28, 2019 at 4:33 PM Leo David wrote:
>
>> Hi Gobinda,
>> gdeploy --version
>> gdeploy 2.0.2
>>
>> yum list installed | grep gdeploy
Hi Stephen,
On Mon, Jun 3, 2019 at 3:57 PM wrote:
> Good Morning,
>
> I'm completely new to this and I'm testing setting up a Gluster
> environment with oVirt. However, my deployment keeps failing and I don't
> understand what the error means. Any assistance would be much appreciated.
> Please see
On Mon, May 27, 2019 at 9:41 AM wrote:
> I made them manually: first I created the LVM devices, then the VDO
> devices, then the Gluster volumes
>
In that case you must add these mount options (
inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service)
manually
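
For example, as an /etc/fstab entry for a VDO-backed XFS brick (device path
and mount point are illustrative):

    /dev/mapper/vdo_sdb /gluster_bricks/data xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0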
On Thu, Jun 13, 2019 at 7:11 AM wrote:
> While trying to do a hyperconverged setup and trying to use "configure LV
> Cache" on /dev/sdf, the deployment fails. If I don't use the LV cache SSD
> disk the setup succeeds. Thought you might want to know; for now I retested
> with 4.3.3 and all worked fine,
On Wed, May 22, 2019 at 11:26 AM Sahina Bose wrote:
> +Sachidananda URS
>
> On Wed, May 22, 2019 at 1:14 AM wrote:
>
>> I'm sorry, I'm still working on my Linux knowledge; here is the output of
>> blkid on one of the servers:
>>
>> /dev/nvme0n1: PTTYP
On Tue, May 21, 2019 at 9:00 PM Adrian Quintero
wrote:
> Sac,
>
> 6. Started the hyperconverged setup wizard and added
> "gluster_features_force_varlogsizecheck: false" to the "vars:" section
> of the generated Ansible inventory file
> /etc/ansible/hc_wizard_inventory.yml, as it was
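
For anyone hitting the same check: in the generated inventory the variable
sits under the vars: block. A minimal sketch, assuming the hc_nodes group
layout the wizard generates (host names are illustrative):

    hc_nodes:
      hosts:
        host1.example.com:
        host2.example.com:
        host3.example.com:
      vars:
        gluster_features_force_varlogsizecheck: false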
On Mon, May 20, 2019 at 11:58 AM Sahina Bose wrote:
> Adding Sachi
>
> On Thu, May 9, 2019 at 2:01 AM wrote:
>
>> This only started to happen with oVirt Node 4.3; 4.2 didn't have this
>> issue. Since I updated to 4.3, the host goes into emergency mode on every
>> reboot. The first few times this happened I
ume '/dev/sdc' failed",
>> "rc": 5}
>> failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd',
>> u'pvname': u'/dev/sdd'}) => {"changed": false, "err": " Device /dev/sdd
>> excluded by a filter.\n", "item":
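
"Device ... excluded by a filter" usually means the disk still carries an
old signature or the LVM filter in lvm.conf rejects it. A rough way to
check, assuming /dev/sdd really is meant to be reused (wipefs -a destroys
whatever is on it):

    # list any leftover signatures on the disk
    wipefs /dev/sdd
    lsblk /dev/sdd
    # review the active LVM filter configuration
    grep -E 'global_filter|^[[:space:]]*filter' /etc/lvm/lvm.conf
    # if the disk is confirmed free, clear old signatures (destructive!)
    wipefs -a /dev/sdd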
Hi,
On Wed, Jul 3, 2019 at 3:59 PM PS Kazi wrote:
> Hi,
> I am using an HDD: Toshiba, 7200 RPM, data transfer rate 150 MB/s,
> interface 6 Gb/s.
> But the hyperconverged configuration stopped with the error msg: "Disk
> latency is very high, time taken to copy 1M file is > 10s".
> Please help me to stop
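
As a rough way to see that latency outside the deployment (the test path is
illustrative, and this is an approximation, not necessarily the exact
command the wizard's check runs):

    dd if=/dev/zero of=/gluster_bricks/latency_test bs=1M count=1 oflag=dsync

Anything close to 10 seconds for a synchronous 1M write points at the disk
or the controller (e.g. write cache disabled) rather than the installer.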
Hi,
On Wed, Jul 3, 2019 at 6:22 PM Sachidananda URS wrote:
> Hi,
>
>
> On Wed, Jul 3, 2019 at 3:59 PM PS Kazi wrote:
>
>> Hi,
>> I am using an HDD: Toshiba, 7200 RPM, data transfer rate 150 MB/s,
>> interface 6 Gb/s.
>> But the hyperconverged configuratio
On Wed, Sep 4, 2019 at 9:27 PM wrote:
> I am seeing more successes than failures at creating single- and
> triple-node hyperconverged setups after some weeks of experimentation, so
> I am branching out to additional features: in this case, the ability to
> use SSDs as cache media for hard disks.
>
>
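
Outside the wizard, attaching an SSD as a cache for an HDD-backed brick is
standard lvmcache. A minimal sketch, assuming a volume group gluster_vg_sdb
on the HDD, a data LV gluster_lv_data, and the SSD at /dev/sdf (all names
and sizes illustrative):

    # add the SSD to the brick volume group
    pvcreate /dev/sdf
    vgextend gluster_vg_sdb /dev/sdf
    # carve a cache pool out of the SSD and attach it to the data LV
    lvcreate --type cache-pool -L 100G -n lv_cache gluster_vg_sdb /dev/sdf
    lvconvert --type cache --cachepool gluster_vg_sdb/lv_cache gluster_vg_sdb/gluster_lv_data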