On Wed, Mar 2, 2016 at 6:41 PM, p...@email.cz wrote:
> Hi, here is the next explanation.
> The VDSM log gives me the following message.
> Is this a live check for storage availability? If "SUCCESS", then why
> "<err>" ???
>
- SUCCESS means the command was successful
- <err> is what the command wrote to stderr
On Wed, Mar 2, 2016 at 7:48 PM, p...@email.cz wrote:
> UPDATE:
>
> all "ids" file have permittion fixed to 660 now
>
> # find /STORAGES -name ids -exec ls -l {} \;
> -rw-rw---- 2 vdsm kvm 0 24. úno 07.41
> /STORAGES/g1r5p1/GFS/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md/ids
>
UPDATE:
all "ids" file have permittion fixed to 660 now
# find /STORAGES -name ids -exec ls -l {} \;
-rw-rw---- 2 vdsm kvm 0 24. úno 07.41
/STORAGES/g1r5p1/GFS/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md/ids
-rw-rw---- 2 vdsm kvm 0 24. úno 07.43
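For reference, a minimal sketch of normalizing the owner and mode in one pass (this assumes vdsm:kvm is the intended owner, as in the listing above):

# find /STORAGES -name ids -exec chown vdsm:kvm {} \; -exec chmod 660 {} \;
# find /STORAGES -name ids -exec ls -l {} \;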
Hi,
in general, programs use stderr (which you see after <err>) to output
miscellaneous information. You can see here that "dd" is an example of
this behaviour. While it exited successfully, it also wrote
non-essential information to its secondary output. You do not see stdout
(the primary
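A minimal illustration of the stdout/stderr split described above (the input file is just an example; any small readable file behaves the same):

# dd if=/etc/hostname of=/dev/null bs=4096 count=1
# dd if=/etc/hostname of=/dev/null bs=4096 count=1 2>/dev/null
# echo $?

The first command prints the "records in/out" summary because that text goes to stderr; the second prints nothing once stderr is redirected; the exit status is 0 in both cases, which is what vdsm reports as SUCCESS.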
Hi there,
I currently have a strange problem on a new oVirt 3.6 installation. At
the moment a clean shutdown doesn't work; most of the time it reboots
the system or hangs during the shutdown process.
I discovered this when I tested our multiple-UPS solution and sent some
test signals over IPMI to
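For context, a hedged example of the kind of IPMI test signal meant here; the BMC address and credentials are placeholders, and "chassis power soft" requests an ACPI soft shutdown of the host:

# ipmitool -I lanplus -H bmc.example.com -U admin -P secret chassis power soft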
Hi, here is the next explanation.
The VDSM log gives me the following message.
Is this a live check for storage availability? If "SUCCESS", then why
"<err>" ???
regs.
Pavel
Thread-233::DEBUG::2016-03-02
17:31:55,275::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n346
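As far as I understand, getReadDelay is vdsm timing a small direct-I/O read from the storage domain, roughly equivalent to something like the following (the exact target file and block size are assumptions):

# dd if=/rhev/data-center/mnt/<server:_export>/<sd-uuid>/dom_md/metadata of=/dev/null bs=4096 count=1 iflag=direct

The quoted text after <err> is simply dd's usual stderr summary; the SUCCESS status is what matters.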
I usually deploy with only a basic KS that places an ssh cert on the host.
Then Ansible adds the repos and adjusts the network and iSCSI initiator,
and the oVirt bootstrap takes care of the rest.
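A minimal sketch of the kind of %post section meant here (the key is a placeholder); everything else is left to Ansible and the oVirt host bootstrap:

%post
mkdir -p -m 700 /root/.ssh
echo "ssh-rsa AAAA...placeholder... ansible@deploy" >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
%end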
On Mar 2, 2016 5:42 PM, "Duckworth, Douglas C" wrote:
> Never mind. This seems to be
"Do you mean not using VIR_DOMAIN_SNAPSHOT_CREATE_LIVE?"
Unsure. In engine.log I see the attached
--
Thanks
Douglas Duckworth, MSc, LFCS
Unix Administrator
Tulane University
Technology Services
1555 Poydras Ave
NOLA -- 70112
E: du...@tulane.edu
O: 504-988-9341
F: 504-988-8505
On
Does anyone have a kickstart available to share? We are looking to
automate hypervisor deployment with Cobbler.
Thanks
Doug
--
Thanks
Douglas Duckworth, MSc, LFCS
Unix Administrator
Tulane University
Technology Services
1555 Poydras Ave
NOLA -- 70112
E: du...@tulane.edu
O: 504-988-9341
F:
It is my understanding that VMs must pause during Live Snapshot creation
when "Save Memory" option is selected.
If you're only seeking to mitigate against issues that might arise after
doing package updates then what's the advantage of going with that route?
Down the road does anyone know if it
On Wed, Mar 2, 2016 at 11:04 PM, Duckworth, Douglas C
wrote:
> It is my understanding that VMs must pause during Live Snapshot creation
> when "Save Memory" option is selected.
>
> If you're only seeking to mitigate against issues that might arise after
> doing package updates
Never mind. This seems to be a place to start:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Installation_Guide/sect-Automated_Installation.html
https://access.redhat.com/solutions/41697
--
Thanks
Douglas Duckworth, MSc, LFCS
Unix Administrator
On 03/03/2016 12:43 AM, Nir Soffer wrote:
PS: # find /STORAGES -samefile
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
-print
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
= the "shadow file" (hard link) in the ".glusterfs" dir is missing.
How can
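For anyone following along, a hedged sketch of how the missing hard link can be checked (the path is the one from the find output above):

# getfattr -d -m . -e hex /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids

The trusted.gfid attribute gives the file's GFID; a healthy brick keeps a second hard link to the file under <brick>/.glusterfs/<first two hex chars>/<next two hex chars>/<gfid as a uuid>, which is why find -samefile normally prints two paths.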
On Thu, Mar 3, 2016 at 2:54 AM, David LeVene
wrote:
> Hi,
>
> Thanks for the quick responses & help.. answers in-line at the end of this
> email.
>
> Cheers
> David
>
> -----Original Message-----
> From: Edward Haas [mailto:edwa...@redhat.com]
> Sent: Wednesday,
On Thu, Mar 03, 2016 at 12:54:25AM +, David LeVene wrote:
>
> Can you check our patches? They should resolve the problem we saw in the
> log: https://gerrit.ovirt.org/#/c/54237 (based on oVirt-3.6.3)
>
> -- I've manually applied the patch to the node that I was testing on
> and the
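In case it helps others, a hedged sketch of one way to pull that change onto a test host (the project name and patchset number are assumptions; check the change page for the exact ref):

# git fetch https://gerrit.ovirt.org/vdsm refs/changes/37/54237/1
# git cherry-pick FETCH_HEAD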
On Thu, Mar 3, 2016 at 9:06 AM, Dan Kenigsberg wrote:
>
> On Thu, Mar 03, 2016 at 12:54:25AM +, David LeVene wrote:
> >
> > Can you check our patches? They should resolve the problem we saw in the
> > log: https://gerrit.ovirt.org/#/c/54237 (based on oVirt-3.6.3)
> >
> >
Hi guys,
thanks a lot for your support, first of all.
Because we were under huge time pressure, we found a "Google
workaround" which deletes both files. It helped, probably as a first
step of the recovery.
eg: " # find /STORAGES/g1r5p5/GFS/ -samefile
Yes, we have had "ids" split-brains plus some other VMs' files.
The split-brains were fixed by healing with the preferred (source) brick.
eg: " # gluster volume heal 1KVM12-P1 split-brain source-brick
16.0.0.161:/STORAGES/g1r5p1/GFS "
Pavel
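As an aside, the affected files can usually be listed first with the heal-info command before choosing the source brick, e.g.:

# gluster volume heal 1KVM12-P1 info split-brain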
Okay, so what I understand from the output above is you have
On 03/02/2016 01:36 AM, David LeVene wrote:
> Hi Dan,
>
> I missed the email as the subject line changed!
>
> So we use and run IPv6 in our network - not sure if this is related. The
> addresses are handed out via SLAAC, so that would be where the IPv6 address is
> coming from.
>
> My memory
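If the SLAAC address turns out to matter here, a minimal sketch of switching autoconfiguration off on the management interface (the interface name, and whether this is relevant at all, are assumptions):

# sysctl -w net.ipv6.conf.ovirtmgmt.accept_ra=0
# sysctl -w net.ipv6.conf.ovirtmgmt.autoconf=0

(the same keys can go into a file under /etc/sysctl.d/ to persist across reboots)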
Hi,
We've migrated our storage from glusterfs to iSCSI, so now we have two
storage domains in our data center. As we've already finished, we want to
remove the gluster storage domain from our data center (which is the
master storage domain right now).
We've tried to put it into maintenance, but we're getting this error: