Hi All,

So after 24 hours I hit this:

Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The
last packet successfully received from the server was 86,290,759
milliseconds ago. The last packet sent successfully to the server was
86,290,760 milliseconds ago. is longer than the server configured value
of 'wait_timeout'. You should consider either expiring and/or testing
connection validity before use in your application, increasing the
server configured values for client timeouts, or using the Connector/J
connection property 'autoReconnect=true' to avoid this problem.

I think I hit a cloudstack timeout of 24 hours somewhere; I need to check
which one it might have been. But I hit the above error as it was cleaning
up, so even if the copy had worked, it looks like it would have failed
anyway when it tried to update the db.

This looks like a JDBC setting to me, or perhaps a MySQL setting? How can
I increase it?
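
For reference, I'm guessing the MySQL side would be something like the
below (wait_timeout is in seconds, so it would need to go above 86400 to
survive a 24 hour idle connection):

    -- show the current values
    SHOW VARIABLES LIKE '%timeout%';
    -- raise them on the running server (affects new connections only;
    -- add matching wait_timeout / interactive_timeout lines under
    -- [mysqld] in my.cnf to persist across restarts)
    SET GLOBAL wait_timeout = 172800;
    SET GLOBAL interactive_timeout = 172800;

The error also suggests the Connector/J property autoReconnect=true, which
I assume would need to go into the JDBC URL the management server uses
(via db.properties, I think).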

Cheers!


On Thu, Aug 25, 2016 at 7:41 AM, cs user <[email protected]> wrote:

> Thanks Makrand, I will take a look at these values, much appreciated!
>
> On Thu, Aug 25, 2016 at 7:36 AM, Makrand <[email protected]> wrote:
>
>> Hi,
>>
>> Timeouts while moving big volumes on top of XenServer are a pretty
>> common thing. Last time we had an issue with bigger volumes, and the
>> following global settings were changed. (We just added one extra zero
>> at the end of each setting; this was on XenServer 6.2 and ACS 4.4.2.)
>>
>> migratewait: 3600
>> storage.pool.max.waitseconds: 3600
>> vm.op.cancel.interval: 3600
>> vm.op.cleanup.wait: 3600
>> wait: 1800
>> vm.tranisition.wait.interval: 3600
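>>
>> These can be changed from the GUI or with the updateConfiguration API,
>> e.g. in cloudmonkey (if I remember right, some of them need a
>> management server restart before they take effect):
>>
>>     update configuration name=migratewait value=36000
>>     update configuration name=storage.pool.max.waitseconds value=36000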
>>
>> --
>> Makrand
>>
>>
>> On Thu, Aug 25, 2016 at 11:56 AM, cs user <[email protected]> wrote:
>>
>> > Hi Makrand,
>> >
>> > Thanks for responding!
>> >
>> > Yes, this does work for us for small disks. Even with small disks,
>> > however, it seems to take a while, 5 mins or so, on a pretty fast
>> > 10Gb network.
>> >
>> > However, I am currently trying to move 2TB disks, which is taking a
>> > while, and I was hitting a 3 hour time limit which seemed to be a
>> > cloudstack default for cloning disks.
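>> >
>> > For what it's worth, I suspect that 3 hour default is the
>> > copy.volume.wait global setting (10800 seconds, if I remember
>> > right); something like this in cloudmonkey should confirm:
>> >
>> >     list configurations name=copy.volume.wait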
>> >
>> > Migrations within the cluster, as you say, seem to use Storage
>> > XenMotion, and these work fine with small disks. Haven't really
>> > tried huge ones with that.
>> >
>> > I think you are right that it is using the SSVM, despite the fact
>> > that I can't seem to see much activity when the copy is occurring.
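>> >
>> > To try to confirm that, I might log into the SSVM and watch its
>> > interface counters while a copy runs; on a XenServer host I believe
>> > something like this works (assuming the stock system VM key and the
>> > SSVM's link-local address from the console):
>> >
>> >     ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@<ssvm-link-local-ip>
>> >     watch -n 1 cat /proc/net/dev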
>> >
>> > Thanks!
>> >
>> >
>> >
>> > On Wed, Aug 24, 2016 at 7:36 PM, Makrand <[email protected]> wrote:
>> >
>> > > Hi,
>> > >
>> > > I think you must be seeing an option like (storage migration
>> > > required) while you move volumes between primary storages. I've
>> > > seen people in the past complaining about this option not working
>> > > (using the GUI or API) with a setup similar to yours. Did you get
>> > > this working?
>> > >
>> > > Anyway, I think it has to be the system VM, because primary
>> > > storage A has no idea about primary storage B via the hypervisor;
>> > > only cloudstack (the SSVM) can see both as part of one cloud zone.
>> > >
>> > > In the normal case of moving a volume within a cluster, Storage
>> > > XenMotion is what it uses.
>> > >
>> > >
>> > >
>> > > --
>> > > Makrand
>> > >
>> > >
>> > > On Wed, Aug 24, 2016 at 7:50 PM, cs user <[email protected]> wrote:
>> > >
>> > > > Hi All,
>> > > >
>> > > > XenServer 6.5, CloudStack 4.5.2, NFS primary storage volumes.
>> > > >
>> > > > Let's say I have 1 pod with 2 clusters, and each cluster has its
>> > > > own primary storage.
>> > > >
>> > > > If I migrate a volume from one primary storage to the other,
>> > > > using cloudstack, what aspect of the environment is responsible
>> > > > for this copy?
>> > > >
>> > > > I'm trying to identify bottlenecks, but I can't see what is
>> > > > responsible for this copying. Is it the xen hosts themselves or
>> > > > the secondary storage vm?
>> > > >
>> > > > Thanks!
>> > > >
>> > >
>> >
>>
>
>
