I am applying all of the patches.  The strange part is that these patches don't
show in XenCenter even though they are valid and needed.

I did find an interesting little snippet in the directions that may hint at
why this is not working:

For the separate storage network to work correctly, it must be the only 
interface that can ping the primary storage device’s IP address. For example, 
if eth0 is the management network NIC, ping -I eth0 <primary storage device IP> 
must fail. In all deployments, secondary storage devices must be pingable from 
the management network NIC or bond. If a secondary storage device has been 
placed on the storage network, it must also be pingable via the storage network 
NIC or bond on the hosts as well. 

This may just mean that you would be multihoming if you have multiple 
interfaces that can reach this network, which would negate any advantage of a 
dedicated storage network.
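The two checks in that snippet can be scripted. A rough sketch, assuming Linux
iputils ping and hypothetical names/addresses (eth0 as the management NIC,
eth1 as the storage NIC, 10.0.1.50 as the primary storage device), substitute
your own:

```shell
#!/bin/sh
# Sketch of the connectivity rules quoted above. The interface names and
# the storage IP are hypothetical placeholders -- substitute your own.
PRIMARY_IP=${PRIMARY_IP:-10.0.1.50}
MGMT_NIC=${MGMT_NIC:-eth0}
STORAGE_NIC=${STORAGE_NIC:-eth1}

# True if $2 answers pings sent out of interface $1 (iputils ping flags:
# -I source interface, -c packet count, -W per-reply timeout in seconds).
reachable_via() {
    ping -I "$1" -c 2 -W 2 "$2" >/dev/null 2>&1
}

# Rule 1: the management NIC must NOT be able to reach primary storage.
if reachable_via "$MGMT_NIC" "$PRIMARY_IP"; then
    echo "FAIL: primary storage answers on $MGMT_NIC (host is multihomed)"
else
    echo "OK: $MGMT_NIC cannot reach primary storage"
fi

# Rule 2: the storage NIC MUST be able to reach primary storage.
if reachable_via "$STORAGE_NIC" "$PRIMARY_IP"; then
    echo "OK: $STORAGE_NIC reaches primary storage"
else
    echo "FAIL: $STORAGE_NIC cannot reach primary storage"
fi
```

Per the directions, the first ping is expected to fail and the second to
succeed; a FAIL on rule 1 means the management interface is multihomed onto
the storage network.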

I will reply back with what I find.

William Clark

On Jul 13, 2012, at 4:15 AM, James Kahn wrote:

> Maybe you are being hit by a XenServer bug? Look at issues resolved by
> this hot fix: http://support.citrix.com/article/CTX133812
> 
> 
> 
> -----Original Message-----
> From: William Clark <majorgearh...@gmail.com>
> Reply-To: "cloudstack-users@incubator.apache.org"
> <cloudstack-users@incubator.apache.org>
> Date: Friday, 13 July 2012 1:44 PM
> To: "cloudstack-users@incubator.apache.org"
> <cloudstack-users@incubator.apache.org>
> Cc: "cloudstack-users@incubator.apache.org"
> <cloudstack-users@incubator.apache.org>
> Subject: Re: Timeout issue when adding ISO's to Cloudstack
> 
>> Healthcheck came back good, but I see the issue. I have 2 interfaces and
>> the public one is tagged with multiple VLANs. On the SSVM I see a static
>> route for my secondary storage NAS device, but I cannot ping the default
>> gateway for this interface no matter what I do on the CloudStack side. I
>> have tried setting the VLAN, removing it, and had no luck. I have verified
>> that outside of this VM it does ping, even from the hypervisor that it is
>> on. Does anyone know if there are tagging issues with the current
>> CloudStack?
>> 
>> Bill Clark
>> Sent from my iPhone
>> 
>> On Jul 12, 2012, at 5:07 PM, Nitin Mehta <nitin.me...@citrix.com> wrote:
>> 
>>> Yeah, that can be the issue.
>>> Try doing ssvm health check - step 2 from
>>> http://wiki.cloudstack.org/pages/viewpage.action?pageId=9601278&focusedCommentId=10747987#comment-10747987
>>> 
>>> -----Original Message-----
>>> From: Caleb Call [mailto:calebc...@me.com]
>>> Sent: Thursday, July 12, 2012 4:03 PM
>>> To: cloudstack-users@incubator.apache.org
>>> Subject: Re: Timeout issue when adding ISO's to Cloudstack
>>> 
>>> Have you verified network connectivity on the SSVM?
>>> 
>>> 
>>> On Jul 12, 2012, at 4:29 PM, William Clark wrote:
>>> 
>>>> Env:
>>>> Host A: CloudStack 3.0.2
>>>> Host B: XenServer 6.0.2
>>>> Host C: XenServer 6.0.2
>>>> Host D: XenServer 6.0.2
>>>> 
>>>> - All HVs are in a pool and have 2 FC LUNs assigned to the pool, as
>>>> well as primary and secondary NFS storage.
>>>> - CloudStack has a single zone, pod, and cluster
>>>> - The 2 FC LUNs and one of the NFS exports are configured as primary
>>>> storage
>>>> - The remaining NFS export is configured as secondary storage
>>>> - All of the system VMs are up and running
>>>> - We have 2 interfaces: the management one, which is wide open, and a
>>>> public/storage one, which is tagged with various VLANs
>>>> 
>>>> Problem:
>>>> When we go to add an ISO, it eventually comes back with an error in
>>>> the logs: "WARN  [storage.download.DownloadListener] (Timer-9:)
>>>> Entering download error state: timeout waiting for response from
>>>> storage host".
>>>> 
>>>> Troubleshooting Steps so Far:
>>>> - I have configured secstorage.allowed.internal.sites with the CIDR
>>>> block that all of our HVs are in.
>>>> - I have removed and re-added the secondary storage
>>>> - I have removed and re-added the secondary storage with an IP address
>>>> instead of the FQDN
>>>> - I have verified the various network segments are configured properly
>>>> with the right VLANs and have verified in XenCenter that those VLANs
>>>> show as connected.
>>>> - I have been able to manually mount the secondary storage on the
>>>> master and all HV hosts.
>>>> 
>>>> At this point I am out of ideas and would love to get someone else's
>>>> take on this.
>>>> 
>>>> William Clark
>>>> 
>>> 
>> 
> 
> 
