Thanks, George, for your help. I tried migrating another VM, and it finally succeeded. I guess the root cause is that there was an ISO attached to the failed VM, which is a remote CIFS file. Nothing was attached to the VM that migrated successfully. I will check the log as you said when this kind of issue happens
Okay, so I have two Dell PowerEdge servers and we are using a NAS solution
called TrueNAS by IX Systems. This is basically a ZFS version 28
implementation for the NAS solution.
Anyway, when we mount our NFS storage repositories over our 10Gb interfaces on
the Dell PowerEdge servers and the
On 3 Jan 2013, at 03:19, George Shuklin wrote:
Still not clear how to migrate VM's in the following cases:
- When VM is not online (vm-migrate insists it should be "running").
- When origin and destination hosts are within the same pool.
offline cross-pool migration is not supported, AFAIK.
On 02/01/13 21:19, George Shuklin wrote:
Live migration performs a set of iterations over the VM's memory (you can see
details in /tmp/xenguest.log on the source host); live=false performs 'pause
and copy' mode. They are almost the same for idle machines, but make a
huge difference for heavily loaded servers
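The two modes described above can be selected from the CLI; a minimal sketch with placeholder UUIDs/hostnames, assuming both hosts are in the same pool and can reach the VM's SR:

```shell
# Live migration: iterative memory copy while the VM keeps running
# (progress is visible in /tmp/xenguest.log on the source host).
xe vm-migrate uuid=<vm-uuid> host=<destination-host> live=true

# Non-live migration: the VM is paused, its memory is copied once,
# then it is resumed on the destination ("pause and copy").
xe vm-migrate uuid=<vm-uuid> host=<destination-host> live=false
```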
Hello.
On 03/01/13 06:11, Rob Hoes wrote:
It should be possible to create bonds of 4 NICs in the newest
version of XenCenter (the one that comes with XenServer 6.1).
Are you using that version?
No, I was using XenCenter 6.0. Already upgraded to 6.1.
It allowed me to select up to 4 physical in
Hello.
You hijacked my thread!
On 03/01/13 12:10, ad...@xenhive.com wrote:
What is the right way to clean up those temp VDI? Should I use
vdi-forget or vdi-destroy?
That depends on what you want; they behave differently.
vdi-forget leaves the storage (file or logical volume) in its place
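A minimal sketch of the difference between the two commands (UUIDs are placeholders):

```shell
# vdi-forget: removes the VDI record from the xapi database only;
# the underlying file or logical volume stays on the SR and can be
# re-discovered later by rescanning the SR.
xe vdi-forget uuid=<vdi-uuid>
xe sr-scan uuid=<sr-uuid>    # re-introduces forgotten VDIs

# vdi-destroy: deletes both the database record AND the data on disk.
xe vdi-destroy uuid=<vdi-uuid>
```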
I'm trying to collect all the differences between XCP 1.1 and 1.6 here:
http://wiki.xen.org/wiki/XCP_1.6_changes_compare_to_1.1
If anyone has noticed something odd, please add it.
On 04.01.2013 00:04, Rushikesh Jadhav wrote:
The VM was imported, so I did another import of the same xva.
After internal/external
The VM was imported, so I did another import of the same xva.
After an internal/external shutdown/reboot, the other-config field is not
updated. It just stays exactly the same.
Thanks George for noticing.
I don't test debian-xapi; could you please check if it's the same there? From
Dave's reply I assume that it's sa
First of all, I want to say how much I love the Storage XenMotion feature
in XCP 1.6. I moved the storage for several VMs from one storage
repository to another. Overall, things went really well. One of the VMs
is having a problem, though.
The storage migration failed for that VM on the first
Hello,
I use this hardware configuration:
ProLiant BL465c G7 with embedded Emulex NC551i dual-port 10 Gb Converged
Network Adapter
Two HP VirtualConnect Flex-10 Enet Modules
Cisco C2960G
All firmware is at current versions. The Emulex NC551i is using FW version
4.1.450.7, corresponding with be2net
Really strange.
But I see last_shutdown_time: 20120406T12:57:50Z
2012-04-06... A very old shutdown. Or maybe it's just an upgraded pool? Please
try to reboot any VM and see if the date changes.
PS PLATFORM_VERSION='1.6.8' <- not a release version of XCP...
03.01.2013 19:59, Rushikesh Jadhav wrote:
Hi George and Dave,
I'm not sure how, but I do have all the parameters in the VM's other-config
[root@s3 ~]# xe vm-param-get param-name=other-config
uuid=1b5d4d31-e8ff-0e1a-bfff-ad2d8a118490
auto_poweron: true; vgpu_pci: ; import_task:
OpaqueRef:3c6d447f-66d9-64d2-915f-41e6d177090b; mac_seed:
66442023-d
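Individual keys in a map parameter like other-config can also be read one at a time with param-key, rather than dumping the whole map; a sketch using the UUID above:

```shell
# Read a single key from the other-config map parameter.
xe vm-param-get uuid=1b5d4d31-e8ff-0e1a-bfff-ad2d8a118490 \
    param-name=other-config param-key=auto_poweron
```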
Yes, if possible. Those fields were very useful.
Some use cases:
1) They can be used to detect crashed VMs. If an SR goes offline and a
VM's mount options say errors=panic, the VM crashes. Those fields make it
very easy to search for crashed VMs and restart them after the SR is repaired.
2) Status 'cr
It's not the memory, actually, but the maxmem value (just an internal Xen
limit for the domain). That leak does not cause any direct damage (like an
'out of memory' condition for the host), but it raises the theoretical memory
limit for the domain, so the domain can (if it wants) ask for more and
more memory beyon
Hi George,
They weren't deliberately removed -- they were missed when we split xenopsd
from xapi. I think they could probably be put back again. Perhaps we should
make them first-class fields rather than other-config keys, to make sure they
don't go missing again?
Out of curiosity, what kind
03.01.2013 19:07, jeromemaloberti wrote:
Maybe just restore the original value for the domain after a successful
migration? Is that change for the new domain caused by xapi or by xenguest?
It is the same issue. Some additional memory is allocated for
the migration, but it is never reclaimed. The s
Good day.
Found (suddenly) that XCP 1.6 does not provide metrics about shutdown
time, reason, and initiator. Those were extremely useful metrics and it's
really sad to see them missing in the new version.
In XCP 1.1 they were placed in other-config:
last_shutdown_time: 20130102T15:56:58Z; last_shut
Dave,
On 3 Jan 2013, at 12:05, Dave Scott wrote:
> Hi,
>
> That's very strange. Could it be the size of the "VM.get_all_records"
> response is too big to be sent properly? Do other APIs which generate less
> traffic (e.g. "Pool.get_all_records") always work, or do they sometimes fail
> too?
The master must always be involved to set things up, because the master is
responsible for forwarding API calls to slaves in the pool, and the master runs
the database with all the VM metadata. The actual migration will then happen
between the sender and the receiver directly.
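For a cross-pool (Storage XenMotion) migration, the CLI call names the remote pool master explicitly; a sketch with placeholder values, assuming XCP 1.6 / XenServer 6.1:

```shell
# Cross-pool storage migration: the masters set up the operation and
# forward the API calls, but the data then flows directly between the
# source and destination hosts.
xe vm-migrate uuid=<vm-uuid> \
    remote-master=<destination-pool-master> \
    remote-username=root remote-password=<password> \
    destination-sr-uuid=<destination-sr-uuid> live=true
```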
On 3 Jan 2013, at
Is that the right way to migrate?
I mean, if the master has no access to the destination SR, how will migration work?
03.01.2013 16:36, Rob Hoes wrote:
On 3 Jan 2013, at 03:19, George Shuklin wrote:
Still not clear how to migrate VM's in the following cases:
- When VM is not online (vm-migrate insists i
On 3 Jan 2013, at 03:19, George Shuklin wrote:
>> Still not clear how to migrate VM's in the following cases:
>> - When VM is not online (vm-migrate insists it should be "running").
>> - When origin and destination hosts are within the same pool.
>>
> offline cross-pool migration is not supported
Hi,
6.1 does not have the latest drivers - The Citrix update package has much more
recent drivers.
The card is on the HCL list.
Looking at the router it's plugged into, I don't get any traffic at all coming
from the server. I have configured the VLANs on the server using XenCenter 6.1,
as we hav
Hi George,
You are certainly right about this – this is quite bad.
I saw your pull request on github and will try it out.
Thanks for reporting this and submitting a fix as well!
Cheers,
Rob
On 18 Dec 2012, at 14:24, George Shuklin wrote:
> I found some kind of horrible bug in XCP 1.6.
>
> Af
Hi Mikael,
XCP 1.6 should have the same drivers as XenServer 6.1, so you should already
have the latest ones.
What does your setup look like? How did you configure the VLANs on the XCP
host? And did you configure your (physical) switch port as a trunk port with
access to the VLANs you need?
C
On Thu, 3 Jan 2013 16:44:14 +0550, xen-api-boun...@lists.xen.org wrote:
> All,
>
> I have been playing with Xen Cloud Platform 1.6 in the lab and have been
> noticing some strange behaviour when using the API to drive it. Using
> XenCenter worked some of the time but then appeared to freeze, no
Hi Alexandre,
It should be possible to create bonds of 4 NICs in the newest version of
XenCenter (the one that comes with XenServer 6.1). Are you using that version?
The CLI does not have a limit on the number of NICs in a bond, so you can
indeed use that if XenCenter does not work.
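A sketch of creating a 4-NIC bond from the CLI (the network name and PIF UUIDs are placeholders):

```shell
# Create a network for the bond, then bond four PIFs onto it;
# the CLI places no limit on the number of PIFs.
NET=$(xe network-create name-label=bond0)
xe bond-create network-uuid=$NET \
    pif-uuids=<pif1-uuid>,<pif2-uuid>,<pif3-uuid>,<pif4-uuid>
```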
Cheers,
Ro
Hi,
That's very strange. Could it be the size of the "VM.get_all_records" response
is too big to be sent properly? Do other APIs which generate less traffic (e.g.
"Pool.get_all_records") always work, or do they sometimes fail too?
Perhaps you've got a thread or fd leak? If you run "top" and loo
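One way to look for a thread or fd leak in xapi on the pool master (a sketch; the assumption is a standard XCP dom0 where xapi runs as a single process):

```shell
# Count open file descriptors and threads of the xapi process.
XAPI_PID=$(pidof xapi | awk '{print $1}')
ls /proc/$XAPI_PID/fd | wc -l         # open file descriptors
grep Threads /proc/$XAPI_PID/status   # current thread count
```

If either number keeps growing across repeated API calls, that points to a leak.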
All,
I have been playing with Xen Cloud Platform 1.6 in the lab and have been
noticing some strange behaviour when using the API to drive it. Using XenCenter
worked some of the time but then appeared to freeze, not refresh and then on
re-connecting get stuck on "Synchronising". I put this down