Hi All,

Does anyone have any idea about this problem? It seems to be a bug either
in oVirt or GlusterFS, which may be why nobody has an answer so far.
Please correct me if I am wrong.
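
In case it helps anyone willing to dig in, below is a sketch of the
gluster-side health checks that could be run on the rebooted host. The
volume name "vmstore" and the mount path are placeholders, not the actual
names from this setup:

```shell
# Sketch of post-reboot Gluster health checks. Assumptions: the volume
# name "vmstore" and the mount path are placeholders for this setup.

# Are all peers connected and all bricks online?
gluster peer status
gluster volume status vmstore

# Any files pending self-heal, or stuck in split-brain?
gluster volume heal vmstore info
gluster volume heal vmstore info split-brain

# From the hypervisor side: is the oVirt storage-domain mount readable?
ls -l /rhev/data-center/mnt/glusterSD/
```

If bricks are down or heal info lists entries, that would explain the
storage timeouts in the VDSM log.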

Thanks,
Punit

On Wed, Mar 18, 2015 at 5:05 PM, Punit Dambiwal <[email protected]> wrote:

> Hi Michal,
>
> Would you mind letting me know what could have been messed up? I will
> check and try to resolve it. Meanwhile, I am still in touch with the
> Gluster community about this issue.
>
> But the Gluster setup in oVirt is quite straightforward, so how could it
> get messed up by a reboot? If a simple reboot can break it, it does not
> seem like a good, stable technology for production storage.
>
> Thanks,
> Punit
>
> On Wed, Mar 18, 2015 at 3:51 PM, Michal Skrivanek <
> [email protected]> wrote:
>
>>
>> On Mar 18, 2015, at 03:33 , Punit Dambiwal <[email protected]> wrote:
>>
>> > Hi,
>> >
>> > Is there anyone from the community who can help me solve this issue?
>> >
>> > Thanks,
>> > Punit
>> >
>> > On Tue, Mar 17, 2015 at 12:52 PM, Punit Dambiwal <[email protected]>
>> wrote:
>> > Hi,
>> >
>> > I am facing a strange issue with oVirt/GlusterFS, and I still have not
>> determined whether it is related to GlusterFS or oVirt.
>> >
>> > oVirt :- 3.5.1
>> > GlusterFS :- 3.6.1
>> > Hosts :- 4 (compute + storage); each server has 24 bricks
>> > Guest VMs :- more than 100
>> >
>> > Issue :- When I first deployed this cluster, it worked well (all the
>> guest VMs were created and ran successfully). But one day one of the host
>> nodes rebooted, and now none of the VMs can boot; they all fail with the
>> error "Bad Volume Specification".
>> >
>> > VMId :- d877313c18d9783ca09b62acf5588048
>> >
>> > VDSM Logs :- http://ur1.ca/jxabi
>>
>> you've got timeouts while accessing storage… so I guess something got
>> messed up on reboot; it may also be just a gluster misconfiguration…
>>
>> > Engine Logs :- http://ur1.ca/jxabv
>> >
>> > ------------------------
>> > [root@cpu01 ~]# vdsClient -s 0 getVolumeInfo
>> e732a82f-bae9-4368-8b98-dedc1c3814de 00000002-0002-0002-0002-000000000145
>> 6d123509-6867-45cf-83a2-6d679b77d3c5 9030bb43-6bc9-462f-a1b9-f6d5a02fb180
>> >         status = OK
>> >         domain = e732a82f-bae9-4368-8b98-dedc1c3814de
>> >         capacity = 21474836480
>> >         voltype = LEAF
>> >         description =
>> >         parent = 00000000-0000-0000-0000-000000000000
>> >         format = RAW
>> >         image = 6d123509-6867-45cf-83a2-6d679b77d3c5
>> >         uuid = 9030bb43-6bc9-462f-a1b9-f6d5a02fb180
>> >         disktype = 2
>> >         legality = LEGAL
>> >         mtime = 0
>> >         apparentsize = 21474836480
>> >         truesize = 4562972672
>> >         type = SPARSE
>> >         children = []
>> >         pool =
>> >         ctime = 1422676305
>> > ---------------------
>> >
>> > I opened a thread on this earlier but did not get an answer that solved
>> the issue, so I am reopening it:
>> >
>> > https://www.mail-archive.com/[email protected]/msg25011.html
>> >
>> > Thanks,
>> > Punit
>> >
>> >
>> >
>>
>>
>
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users
