[jira] [Created] (CLOUDSTACK-10292) Hostname in metadata when using external DNS is incorrect

2018-02-15 Thread mrunalini (JIRA)
mrunalini created CLOUDSTACK-10292:
--

 Summary: Hostname in metadata when using external DNS is incorrect
 Key: CLOUDSTACK-10292
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10292
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: mrunalini


In the current implementation, both local-hostname and instance-id in the 
metadata point to the same value when a VM is deployed in a network whose 
network offering has no services.

For such a VM the metadata is presented in configdrive.iso. However, the 
metadata does not contain the name the user passed when deploying the VM; 
instead, local_hostname.txt in the metadata contains the VM's internal name 
(e.g. "i-12-254-VM"). Shouldn't it be the name the user supplied when creating 
the VM?

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CLOUDSTACK-10108) ConfigKey based approach for reading 'ping' configuration for Management Server

2017-10-10 Thread mrunalini (JIRA)
mrunalini created CLOUDSTACK-10108:
--

 Summary: ConfigKey based approach for reading 'ping' 
configuration for Management Server
 Key: CLOUDSTACK-10108
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10108
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: mrunalini
Priority: Minor


In CLOUDSTACK-9886, ping.interval and ping.timeout are read using configdao, 
which reads the database directly. Replace this with a ConfigKey-based 
approach.
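
A minimal sketch of what the ConfigKey-based approach could look like, following the pattern used elsewhere in CloudStack. The enclosing class, constructor arguments, and descriptions below are illustrative assumptions, not the actual change:

import org.apache.cloudstack.framework.config.ConfigKey;
import org.apache.cloudstack.framework.config.Configurable;

// Illustrative sketch only: expose ping.interval / ping.timeout as ConfigKeys
// so they are read through the config framework instead of direct DB queries.
public class PingConfig implements Configurable {

    // Hypothetical keys; names and defaults mirror the existing global settings.
    public static final ConfigKey<Integer> PingInterval = new ConfigKey<>("Advanced", Integer.class,
            "ping.interval", "60", "Interval (in seconds) to send pings to agents", false);

    public static final ConfigKey<Float> PingTimeout = new ConfigKey<>("Advanced", Float.class,
            "ping.timeout", "2.5", "Multiplier of ping.interval before a host is considered down", true);

    @Override
    public String getConfigComponentName() {
        return PingConfig.class.getSimpleName();
    }

    @Override
    public ConfigKey<?>[] getConfigKeys() {
        return new ConfigKey<?>[] { PingInterval, PingTimeout };
    }

    // Callers would then read the values with PingInterval.value() / PingTimeout.value()
    // instead of querying the configuration table via configdao.
}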



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CLOUDSTACK-10090) createPortForwardingRule api call accepts 'halt' as Protocol which Stops VR

2017-09-25 Thread mrunalini (JIRA)
mrunalini created CLOUDSTACK-10090:
--

 Summary: createPortForwardingRule api call accepts 'halt' as 
Protocol which Stops VR 
 Key: CLOUDSTACK-10090
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10090
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: mrunalini


When the createPortForwardingRule API is called with the protocol parameter set 
to 'halt', the PF rule is added, but 'halt' is also executed on the VR, which 
stops it.

The following entry is added to the firewall_rules table and the virtual router 
goes to a halted (stopped) state:
mysql> select * from firewall_rules where id = 7 

*** 1. row ***
id: 7
uuid: XXX
ip_address_id: 13
start_port: 222
end_port: 222
state: Revoke
protocol: `halt`
purpose: PortForwarding
account_id: 2
domain_id: 1
network_id: 208
xid: XX
created: 2017-09-04 04:48:16
icmp_code: NULL
icmp_type: NULL
related: NULL
type: User
vpc_id: NULL
traffic_type: NULL
display: 1
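
For illustration, a minimal, hypothetical sketch (not the actual CloudStack validation code) of the kind of protocol whitelist check that would reject 'halt' at the API layer instead of passing it down to the VR:

import java.util.Set;

// Hypothetical sketch: restrict the protocol parameter of createPortForwardingRule
// to known values instead of handing arbitrary strings to the virtual router.
public class ProtocolValidator {
    private static final Set<String> ALLOWED = Set.of("tcp", "udp");

    public static void validate(String protocol) {
        if (protocol == null || !ALLOWED.contains(protocol.toLowerCase())) {
            throw new IllegalArgumentException("Invalid protocol for port forwarding: " + protocol);
        }
    }

    public static void main(String[] args) {
        validate("tcp");                // accepted
        try {
            validate("halt");           // rejected at the API instead of reaching the VR
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}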



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CLOUDSTACK-10054) Volume download times out in 3600 seconds

2017-08-22 Thread mrunalini (JIRA)
mrunalini created CLOUDSTACK-10054:
--

 Summary: Volume download times out in 3600 seconds
 Key: CLOUDSTACK-10054
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10054
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: mrunalini






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CLOUDSTACK-9958) Include tags of resources in listUsageRecords API

2017-08-21 Thread mrunalini (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mrunalini updated CLOUDSTACK-9958:
--
Summary: Include tags of resources in listUsageRecords API  (was: Include 
tags of resources in lisUsageRecords API)

> Include tags of resources in listUsageRecords API
> -
>
> Key: CLOUDSTACK-9958
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9958
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: mrunalini
>
>  Tags field to be included in the listusagerecords response such that it can 
> be used in billing report. E.g.
> "tags":[{"key":"city","value":"Toronto","resourcetype":"UserVm","resourceid":"a0cca906-f985-4b56-ad11-f33e59c4c733","account":"admin","domainid":"dec39eb8-4f81-11e7-8315-067fa031","domain":"ROOT"},{"key":"region","value":"canada","resourcetype":"UserVm","resourceid":"a0cca906-f985-4b56-ad11-f33e59c4c733","account":"admin","domainid":"dec39eb8-4f81-11e7-8315-067fa031","domain":"ROOT"}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CLOUDSTACK-10019) template.properties has hardcoded id

2017-07-26 Thread mrunalini (JIRA)
mrunalini created CLOUDSTACK-10019:
--

 Summary: template.properties has hardcoded id
 Key: CLOUDSTACK-10019
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-10019
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: mrunalini


The template.properties file created after creating a template from a snapshot 
has a hardcoded id = 1.

The id should instead be set to the actual template ID (templateId).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CLOUDSTACK-9958) Include tags of resources in lisUsageRecords API

2017-06-13 Thread mrunalini (JIRA)
mrunalini created CLOUDSTACK-9958:
-

 Summary: Include tags of resources in lisUsageRecords API
 Key: CLOUDSTACK-9958
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9958
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: mrunalini


 The tags field should be included in the listUsageRecords response so that it 
can be used in billing reports, e.g.:

"tags":[
  {"key":"city","value":"Toronto","resourcetype":"UserVm","resourceid":"a0cca906-f985-4b56-ad11-f33e59c4c733","account":"admin","domainid":"dec39eb8-4f81-11e7-8315-067fa031","domain":"ROOT"},
  {"key":"region","value":"canada","resourcetype":"UserVm","resourceid":"a0cca906-f985-4b56-ad11-f33e59c4c733","account":"admin","domainid":"dec39eb8-4f81-11e7-8315-067fa031","domain":"ROOT"}
]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (CLOUDSTACK-9950) listUsageRecords doesn't return required fields

2017-06-08 Thread mrunalini (JIRA)
mrunalini created CLOUDSTACK-9950:
-

 Summary: listUsageRecords doesn't return required fields
 Key: CLOUDSTACK-9950
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9950
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Usage
Reporter: mrunalini


The documented cpuspeed, cpunumber, and memory details are missing from the 
listUsageRecords output.

In the DB (cloud_usage table) there are cpu_speed, cpu_cores, and ram fields, 
but these are not populated for all VMs; they are only populated for VMs 
deployed with custom service offerings.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CLOUDSTACK-9905) VPN Gateway with Public Subnet

2017-05-04 Thread mrunalini (JIRA)
mrunalini created CLOUDSTACK-9905:
-

 Summary: VPN Gateway with Public Subnet 
 Key: CLOUDSTACK-9905
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9905
 Project: CloudStack
  Issue Type: Improvement
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: mrunalini


When a /24 subnet with a public IP range is used, for example 153.97.140.0/24, 
VPN Customer Gateways can be created with this type of CIDR but cannot be 
updated, for example to 153.97.181.0/24. Attempting to do so produces the error 
"The customer gateway cidr list 153.97.181.0/24 contains invalid guest cidr!"

REPRO STEPS
==
I was able to repro this in 4.5.1
1) Created a new VPN Customer Gateway using the same settings as the customer
2) Attempted to change the CIDR list entry from 153.97.180.0/24 to 
153.97.181.0/24
3) The UI became unresponsive
4) The Management-server log shows the following:
2017-03-31 17:10:42,471 WARN [c.c.u.n.NetUtils] 
(API-Job-Executor-9:ctx-ed9b5816 job-172 ctx-32369258) (logid:3a16f24b) cidr 
153.97.181.0/24 is not RFC 1918 compliant
153.97.181.0/24
EXPECTED BEHAVIOR
==
Users should be able to update an existing VPN Customer Gateway's CIDR list as needed.
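
For reference, a minimal, hypothetical sketch (not CloudStack's actual NetUtils implementation) of the kind of RFC 1918 check the warning above points to, showing why a public range such as 153.97.181.0/24 is rejected while private ranges pass:

import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrative only: check whether an IPv4 CIDR falls inside an RFC 1918 private range.
public class Rfc1918Check {

    // RFC 1918 private ranges as base address / prefix length.
    private static final String[][] PRIVATE_RANGES = {
        {"10.0.0.0", "8"}, {"172.16.0.0", "12"}, {"192.168.0.0", "16"}
    };

    public static boolean isRfc1918(String cidr) throws UnknownHostException {
        long addr = toLong(InetAddress.getByName(cidr.split("/")[0]).getAddress());
        for (String[] range : PRIVATE_RANGES) {
            int prefix = Integer.parseInt(range[1]);
            long base = toLong(InetAddress.getByName(range[0]).getAddress());
            long mask = (-1L << (32 - prefix)) & 0xFFFFFFFFL;
            if ((addr & mask) == (base & mask)) {
                return true;
            }
        }
        return false;
    }

    private static long toLong(byte[] bytes) {
        long value = 0;
        for (byte b : bytes) {
            value = (value << 8) | (b & 0xFF);
        }
        return value;
    }

    public static void main(String[] args) throws UnknownHostException {
        // A check like this rejects the public /24 from the bug report,
        // matching the "not RFC 1918 compliant" warning in the log.
        System.out.println(isRfc1918("153.97.181.0/24"));  // false
        System.out.println(isRfc1918("192.168.10.0/24"));  // true
    }
}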



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CLOUDSTACK-9886) After restarting cloudstack-management, it takes time to connect hosts

2017-04-20 Thread mrunalini (JIRA)
mrunalini created CLOUDSTACK-9886:
-

 Summary: After restarting cloudstack-management, it takes time to 
connect hosts
 Key: CLOUDSTACK-9886
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9886
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: mrunalini


The larger the values of 'ping.interval' and 'ping.timeout', the longer it 
takes for hosts to reconnect after a management server restart.

There is a timer task that reconnects hosts based on the ping timeout and ping 
interval values, using the cutoff
(System.currentTimeMillis() >> 10) - (ping.timeout * ping.interval)
So if these values are large, hosts will not be reconnected immediately after 
restart. After a management server restart, hosts should be connected 
irrespective of the ping settings.
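
A small illustrative sketch of the cutoff quoted above, using hypothetical values for ping.interval and ping.timeout, to show how large settings widen the window and delay reconnection after a restart:

// Illustration only (not the actual management server code): how the reconnect
// cutoff described in this issue grows with ping.interval and ping.timeout.
public class PingCutoffExample {
    public static void main(String[] args) {
        long pingInterval = 60;    // seconds, global setting ping.interval (assumed value)
        double pingTimeout = 2.5;  // multiplier, global setting ping.timeout (assumed value)

        // Current time in roughly seconds (>> 10 is approximately a divide by 1000).
        long nowSecs = System.currentTimeMillis() >> 10;

        // Per the description, hosts are picked up for (re)connection based on this cutoff.
        long cutoff = nowSecs - (long) (pingTimeout * pingInterval);

        System.out.println("now        ~ " + nowSecs);
        System.out.println("cutoff     ~ " + cutoff);
        System.out.println("window (s) ~ " + (nowSecs - cutoff));
        // With large ping.interval/ping.timeout the window grows, so after a
        // management server restart hosts may wait a long time before reconnecting.
    }
}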



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CLOUDSTACK-9812) Update "updatePortForwardingRule" API to include additional parameter to update the end port in case of port range

2017-03-03 Thread mrunalini (JIRA)
mrunalini created CLOUDSTACK-9812:
-

 Summary: Update "updatePortForwardingRule" API to include 
additional parameter to update the end port in case of port range
 Key: CLOUDSTACK-9812
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9812
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: mrunalini


- Configure a PF rule with a private port range of 20-25 and a public port 
range of 20-25.
- Trigger the updatePortForwardingRule API.
- The API fails with the following error: "Unable to update the private port of 
port forwarding rule as the rule has port range"

Expected behaviour:

The port range should be modifiable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CLOUDSTACK-9742) Simultaneous snapshots for detached volume

2017-01-16 Thread mrunalini (JIRA)
mrunalini created CLOUDSTACK-9742:
-

 Summary: Simultaneous snapshots for detached volume
 Key: CLOUDSTACK-9742
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9742
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: mrunalini


A detached volume on any hypervisor (XenServer, VMware, KVM) fails to create 
one of the snapshots if there are two snapshots scheduled at the same time for 
the same volume; one of them goes to the Allocated state. Any combination of 
weekly, daily, and hourly simultaneous snapshot creation on a detached volume 
leaves one of the snapshots in the Allocated state forever, and it cannot be 
deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-9741) Simultaneous snapshots for detached volume

2017-01-16 Thread mrunalini (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mrunalini updated CLOUDSTACK-9741:
--
Description: 

A detached volume in VMware (I could confirm that the issue affects XenServer 
as well in a lab repro) fails to create one of the snapshots if there are two 
snapshots scheduled at the same time for the same volume. One of them goes to 
the Allocated state.
In Hoopa we have introduced a new feature allowing simultaneous snapshots for 
volumes which are "associated with a VM".
I noticed that any combination of weekly, daily, and hourly simultaneous 
snapshot creation on a detached volume leaves one of the snapshots in the 
Allocated state forever.




  was:
==
KDDI has reported an issue where a detached volume in VMware (I could confirm 
that the issue affects XenServer as well in a lab repro) fails to create one of 
the snapshots if there are two snapshots scheduled at the same time for the 
same volume. One of them goes to the Allocated state.
In Hoopa we have introduced a new feature allowing simultaneous snapshots for 
volumes which are "associated with a VM".
Support has already communicated to KDDI that simultaneous snapshots on a 
volume are not supported in the version they are using, which is 4.3.0.2, or 
rather that it is a limitation. KDDI is concerned that this is not documented.
Before having further discussions with KDDI, I have reproduced the issue in the 
lab with ACP 4.3.0.2 + XenServer. I noticed that any combination of weekly, 
daily, and hourly simultaneous snapshot creation on a detached volume leaves 
one of the snapshots in the Allocated state forever.
It might be possible to get the same result by executing manual snapshots at 
the same time. I would like to know the following:
1. Is this a limitation in ACP? Please share the details.
2. Does it have any dependency on the hypervisor?
3. My repro of simultaneous snapshots for an attached volume in a version 
earlier than Hoopa did work; what is the improvement in Hoopa?
I have the repro setup, as well as DEBUG and TRACE logs, available.
http://10.112.6.37:8080/client/
ACP: admin/password
SSH /MYSQL: root/password
logs and DB: /root/kddi-snapshot-issue



> Simultaneous snapshots for detached volume
> --
>
> Key: CLOUDSTACK-9741
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9741
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: mrunalini
>
> A detached volume in VMware( I coud confirm that the issue affects XenServer 
> as well in lab repro) fails to create one of the snapshots if it there are 
> two scheduled snapshots at the same time for the same volume. One of that 
> goes to Allocated state.
> In Hoopa we have introduced a new feature for allowing simultaneous snapshots 
> for volumes which are "associated with a VM".
>  I could notice that any combination of weekly,daily and hourly simultaneous 
> snapshot creation on a detached volume results one of the snapshots in 
> Allocated state forever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-9741) Simultaneous snapshots for detached volume

2017-01-16 Thread mrunalini (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-9741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mrunalini updated CLOUDSTACK-9741:
--
Description: 
==
KDDI has reported an issue where a detached volume in VMware (I could confirm 
that the issue affects XenServer as well in a lab repro) fails to create one of 
the snapshots if there are two snapshots scheduled at the same time for the 
same volume. One of them goes to the Allocated state.
In Hoopa we have introduced a new feature allowing simultaneous snapshots for 
volumes which are "associated with a VM".
Support has already communicated to KDDI that simultaneous snapshots on a 
volume are not supported in the version they are using, which is 4.3.0.2, or 
rather that it is a limitation. KDDI is concerned that this is not documented.
Before having further discussions with KDDI, I have reproduced the issue in the 
lab with ACP 4.3.0.2 + XenServer. I noticed that any combination of weekly, 
daily, and hourly simultaneous snapshot creation on a detached volume leaves 
one of the snapshots in the Allocated state forever.
It might be possible to get the same result by executing manual snapshots at 
the same time. I would like to know the following:
1. Is this a limitation in ACP? Please share the details.
2. Does it have any dependency on the hypervisor?
3. My repro of simultaneous snapshots for an attached volume in a version 
earlier than Hoopa did work; what is the improvement in Hoopa?
I have the repro setup, as well as DEBUG and TRACE logs, available.
http://10.112.6.37:8080/client/
ACP: admin/password
SSH /MYSQL: root/password
logs and DB: /root/kddi-snapshot-issue


  was: When two simultaneous snapshots are created for the same volume and the 
same VM, one snapshot is created at a time and the other is picked up in the 
next snapshot poll run


> Simultaneous snapshots for detached volume
> --
>
> Key: CLOUDSTACK-9741
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9741
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>Reporter: mrunalini
>
> ==
> KDDI has reported an issue where a detached volume in VMware( I coud confirm 
> that the issue affects XenServer as well in lab repro) fails to create one of 
> the snapshots if it there are two scheduled snapshots at the same time for 
> the same volume. One of that goes to Allocated state.
> In Hoopa we have introduced a new feature for allowing simultaneous snapshots 
> for volumes which are "associated with a VM".
> Support has already communicated to KDDI that simultaneous snapshots on a 
> volume is not supported in the version they are using which is 4.3.0.2, or 
> rather it is a limitation. KDDI is concerned that this is not documented.
> Before having further discussions with KDDI, I have reproduced the issue lab 
> with ACP 4.3.0.2 + XenServer. I could notice that any combination of 
> weekly,daily and hourly simultaneous snapshot creation on a detached volume 
> results one of the snapshots in Allocated state forever.
> It might be possibly to have the same result by executing the manual 
> snapshots at the same time.I would like to know the following:
> 1. Is this a limitation in ACP? Please share the details. 
> 2. Does it have any dependency on hypervisor?
> 3. My repro of simultaneous snapshots for an attached volume in a version 
> earlier to Hoopa did work, what is the improvement in Hoopa?
> I have the repro setup,DEBUG as well as TRACE logs available.
> http://10.112.6.37:8080/client/
> ACP: admin/password
> SSH /MYSQL: root/password
> logs and DB: /root/kddi-snapshot-issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-9741) Simultaneous snapshots for detached volume

2017-01-16 Thread mrunalini (JIRA)
mrunalini created CLOUDSTACK-9741:
-

 Summary: Simultaneous snapshots for detached volume
 Key: CLOUDSTACK-9741
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9741
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
Reporter: mrunalini


When two simultaneous snapshots are created for the same volume and the same 
VM, one snapshot is created at a time and the other is picked up in the next 
snapshot poll run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)