On Tue, May 21, 2019 at 9:00 PM Adrian Quintero
wrote:
> Sac,
>
> 6.- started the hyperconverged setup wizard and added
> "gluster_features_force_varlogsizecheck: false" to the "vars:" section
> of the generated Ansible inventory
> (/etc/ansible/hc_wizard_inventory.yml), as it was complaining about the
> /var/log size check.
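(For reference, a minimal sketch of where that variable ends up in the wizard-generated inventory; the group name and host entries below are illustrative, not taken from the original file:)

  # /etc/ansible/hc_wizard_inventory.yml (fragment, illustrative)
  hc_nodes:
    hosts:
      host1.example.com:
      host2.example.com:
      host3.example.com:
    vars:
      gluster_features_force_varlogsizecheck: false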
Awesome, thanks! And yes, I agree, this is a great project!
I will now continue to scale the cluster from 3 to 6 nodes including the
storage... I will let y'all know how it goes and post the steps, as I have
only seen examples for 3 hosts but no steps to go from 3 to 6.
regards,
AQ
On Tue, May 21, 2019, Adrian Quintero wrote:
> EUREKA: After doing the above I was able to get past the filter issues;
> however, I am still concerned that during a reboot the disks might come up
> differently. For example /dev/sdb might come up as /dev/sdx...
Even if they change, you don't have to worry, as each PV contains LVM
metadata.
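(A quick way to see this: LVM tracks PVs by the UUID written in the on-disk metadata, not by the kernel name, so the VG assembles the same way even if /dev/sdb shows up as /dev/sdx after a reboot. For example:)

  pvs -o pv_name,pv_uuid,vg_name   # PV UUID and VG membership stay stable across device renames
  blkid /dev/sdb                   # prints the same LVM2_member UUID whatever the sdX name is today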
Sac,
*To answer some of your questions:*
*fdisk -l:*
[root@host1 ~]# fdisk -l /dev/sdb
Disk /dev/sde: 480.1 GB, 480070426624 bytes, 937637552 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes
Thanks for the clarification.
It seems that my nvme (used by vdo) is not locked.
I will check again before opening a bug.
Best Regards,
Strahil Nikolov

On May 21, 2019 09:52, Sahina Bose wrote:
>
>
>
> On Tue, May 21, 2019 at 2:36 AM Strahil Nikolov wrote:
>>
>> Hey Sahina,
>>
>> it seems that almost all of my devices are locked - just like Fred's.
On Tue, May 21, 2019 at 12:16 PM Sahina Bose wrote:
>
>
> On Mon, May 20, 2019 at 9:55 PM Adrian Quintero
> wrote:
>
>> Sahina,
>> Yesterday I started with a fresh install, I completely wiped clean all
>> the disks, recreated the arrays from within my controller of our DL380 Gen
>> 9's.
>>
>> OS: RAID 1 (2x600GB HDDs): /dev/sda // using ovirt node 4.3.3.1 iso.
On Tue, May 21, 2019 at 2:36 AM Strahil Nikolov
wrote:
> Hey Sahina,
>
> it seems that almost all of my devices are locked - just like Fred's.
> What exactly does it mean - I don't have any issues with my bricks/storage
> domains.
>
If the devices show up as locked - it means the disk cannot be used to
create a brick.
On Mon, May 20, 2019 at 9:55 PM Adrian Quintero
wrote:
> Sahina,
> Yesterday I started with a fresh install, I completely wiped clean all the
> disks, recreated the arrays from within my controller of our DL380 Gen 9's.
>
> OS: RAID 1 (2x600GB HDDs): /dev/sda // using ovirt node 4.3.3.1 iso.
>
Hi Adrian,
are you using local storage?
If yes, set a blacklist in multipath.conf (don't forget the "# VDSM PRIVATE"
flag) and rebuild the initramfs and reboot. When multipath locks a path, no
direct access is possible - thus your pvcreate should not be possible. Also,
multipath is not needed for local (non-SAN) disks.
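(Roughly what that amounts to - the wwid below is a placeholder; take the real one from "multipath -ll" or "/usr/lib/udev/scsi_id -g -u /dev/sdX":)

  # /etc/multipath.conf -- the "# VDSM PRIVATE" marker keeps vdsm from overwriting the file
  # VDSM PRIVATE
  blacklist {
      wwid <wwid-of-the-local-disk>
  }

  dracut -f    # rebuild the initramfs so the blacklist also applies at early boot
  reboot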
Hey Sahina,
it seems that almost all of my devices are locked - just like Fred's. What
exactly does it mean - I don't have any issues with my bricks/storage domains.
Best Regards,
Strahil Nikolov
On Monday, 20 May 2019, 14:56:11 GMT+3, Sahina Bose wrote:
To scale existing volumes - you need to add bricks and run rebalance on the
gluster volume...
Sahina,
Yesterday I started with a fresh install, I completely wiped clean all the
disks, recreated the arrays from within my controller of our DL380 Gen 9's.
OS: RAID 1 (2x600GB HDDs): /dev/sda // using ovirt node 4.3.3.1 iso.
engine and VMSTORE1: JBOD (1x3TB HDD):/dev/sdb
DATA1: JBOD (1x3TB HDD): ...
To scale existing volumes - you need to add bricks and run rebalance on the
gluster volume so that data is correctly redistributed as Alex mentioned.
We do support expanding existing volumes, as the bug
https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed.
As to the procedure to expand volumes...
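(For reference, the usual shape of that procedure - volume name and brick paths below are placeholders, not Adrian's actual layout; bricks have to be added in multiples of the replica count, i.e. one per new host here:)

  gluster volume add-brick data \
      host4:/gluster_bricks/data/data \
      host5:/gluster_bricks/data/data \
      host6:/gluster_bricks/data/data
  gluster volume rebalance data start     # redistribute existing files onto the new bricks
  gluster volume rebalance data status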
Sahina,
Can someone from your team review the steps done by Adrian?
Thanks,
Freddy
On Thu, Apr 25, 2019 at 5:14 PM Adrian Quintero
wrote:
> Ok, I will remove the extra 3 hosts, rebuild them from scratch and
> re-attach them to clear any possible issues and try out the suggestions
> provided.
>
>
Ok, I will remove the extra 3 hosts, rebuild them from scratch and
re-attach them to clear any possible issues and try out the suggestions
provided.
thank you!
On Thu, Apr 25, 2019 at 9:22 AM Strahil Nikolov
wrote:
> I have the same locks, despite having blacklisted all local disks:
>
> # VDSM PRIVATE
I have the same locks, despite having blacklisted all local disks:

# VDSM PRIVATE
blacklist {
    devnode "*"
    wwid Crucial_CT256MX100SSD1_14390D52DCF5
    wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
    wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
    wwid nvme.1cc1-324a31
}

All my hosts have the same locks, so it seems to be OK.
Best Regards,
Strahil Nikolov
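(One way to check whether the blacklist actually took effect after the initramfs rebuild and reboot:)

  systemctl reload multipathd
  multipath -ll     # blacklisted local disks should no longer be listed as mpath devices
  lsblk -o NAME,SIZE,TYPE,MOUNTPOINT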
On Thursday, 25 April 2019, 08:28:31 GMT-4, Adrian Quintero wrote:
Under Compute > Hosts, select the host that has the locks on /dev/sdb,
/dev/sdc, etc., select Storage Devices, and in here is where you see a
small column with a bunch of lock images showing for each row.
You don't create the brick on the /dev/sd* device.
You can see where I create the brick on the highlighted multipath device
(see attachment). If for some reason you can't do that, you might need
to run wipefs -a on it, as it probably has some leftover headers from
another FS.
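(i.e. something along these lines - the device name is a placeholder for whichever multipath device is selected in the UI:)

  wipefs /dev/mapper/<mpath-device>       # list any leftover filesystem/RAID signatures
  wipefs -a /dev/mapper/<mpath-device>    # wipe them so the brick can be created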
On 2019-04-25 08:53, Adrian Quintero wrote:
I understand, however the "create brick" option is greyed out (not
enabled); the only way I could get that option to be enabled is if I
manually edit the multipath.conf file and add
-
# VDSM REVISION 1.3
# VDSM PRIVATE
# BEGIN Added by gl
You create the brick on top of the multipath device. Look for one that
is the same size as the /dev/sd* device that you want to use.
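(lsblk and multipath -ll make that mapping easy to spot - each mpath device is listed with its size and the /dev/sd* path(s) underneath it:)

  lsblk -o NAME,SIZE,TYPE,WWN
  multipath -ll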
On 2019-04-25 08:00, Strahil Nikolov wrote:
> In which menu do you see it this way?
>
> Best Regards,
> Strahil Nikolov
>
> On Wednesday, 24 April 2019, 08:55:22 GMT-4, Adrian Quintero wrote:
Under Compute > Hosts, select the host that has the locks on /dev/sdb,
/dev/sdc, etc., select Storage Devices, and in here is where you see a
small column with a bunch of lock images showing for each row.
However, as a workaround, on the newly added hosts (3 total), I had to
manually modify /etc/multipath.conf...
In which menu do you see it this way?
Best Regards,
Strahil Nikolov
On Wednesday, 24 April 2019, 08:55:22 GMT-4, Adrian Quintero wrote:
Strahil, this is the issue I am seeing now.
This is thru the UI when I try to create a new brick.
So my concern is: if I modify the filters on the OS, what impact will that
have after server reboots?
Strahil,
this is the issue I am seeing now
[image: image.png]
This is thru the UI when I try to create a new brick.
So my concern is: if I modify the filters on the OS, what impact will that
have after server reboots?
thanks,
On Mon, Apr 22, 2019 at 11:39 PM Strahil wrote:
> I have edited my multipath.conf to exclude local disks...
Hi,
I am not sure if I understood your question, but here is a statement from
the install guide of RHHI (Deploying RHHI) :
"You cannot create a volume that spans more than 3 nodes, or expand an
existing volume so that it spans
across more than 3 nodes at a time."
Page 11, 2.7 Scaling.
Regards.
Use the created multipath devices.
I have edited my multipath.conf to exclude local disks, but you need to set
"# VDSM PRIVATE" as per the comments in the header of the file.
Otherwise, use the /dev/mapper/multipath-device notation - as you would do
with any Linux system.
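(For a manually prepared brick that would look roughly like this - device, VG and LV names are placeholders:)

  pvcreate /dev/mapper/<mpath-device>
  vgcreate gluster_vg_data /dev/mapper/<mpath-device>
  lvcreate -l 100%FREE -n gluster_lv_data gluster_vg_data
  mkfs.xfs -i size=512 /dev/gluster_vg_data/gluster_lv_data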
Best Regards,
Strahil Nikolov

On Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
Thanks Alex, that makes more sense now. While trying to follow the instructions
provided, I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd are locked and
indicating "multipath_member", hence not letting me create new bricks. And in
the logs I see:
"Device /dev/sdb excluded by a filter."
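(The "excluded by a filter" message comes from the LVM filter on the host rather than from multipath. It can be inspected with the commands below; vdsm-tool config-lvm-filter exists on oVirt 4.2+ hosts and proposes a filter covering the local PVs, so it is worth checking before editing lvm.conf by hand:)

  grep '^filter' /etc/lvm/lvm.conf    # any filter already written into lvm.conf
  vdsm-tool config-lvm-filter         # review/apply the filter vdsm recommends for this host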
On 2019-04-22 17:33, adrianquint...@gmail.com wrote:
Found the following and answered part of my own questions, however I
think this sets up a new set of replica 3 bricks, so if I have 2 hosts
fail from the first 3 hosts then I lose my hyperconverged setup?
https://access.redhat.com/documentation/en-us/
On 2019-04-22 14:48, adrianquint...@gmail.com wrote:
Hello,
I have a 3 node Hyperconverged setup with gluster and added 3 new
nodes to the cluster for a total of 6 servers.
I am now taking advantage of more compute power but can't scale out my
storage volumes.
Current Hyperconverged setup:
- hos
Found the following and answered part of my own questions, however I think this
sets up a new set of replica 3 bricks, so if I have 2 hosts fail from the first
3 hosts then I lose my hyperconverged setup?
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualizat
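(What that document describes ends up as a distributed-replicate volume: the three new bricks form a second replica-3 set rather than extending the first one. "gluster volume info" on such a volume looks roughly like this - names are illustrative:)

  Volume Name: data
  Type: Distributed-Replicate
  Number of Bricks: 2 x 3 = 6
  Brick1: host1:/gluster_bricks/data/data   (replica set 1)
  Brick2: host2:/gluster_bricks/data/data
  Brick3: host3:/gluster_bricks/data/data
  Brick4: host4:/gluster_bricks/data/data   (replica set 2)
  Brick5: host5:/gluster_bricks/data/data
  Brick6: host6:/gluster_bricks/data/data

So losing two hosts from the same replica set does take that set below quorum, while losing one host per set is tolerated.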