Hi Imesh,

I have just successfully deployed the ESB 4.8.1 cartridge (manager + workers) on
OpenStack, but I had to make some changes to the cartridge configuration:

1) In /etc/puppet/manifests/nodes/base.pp I had to add $mb_url =
"tcp://[mb_ip]:[mb_port]", since the PCA complained about the missing message
broker (MB) information.

2) In the plugin plugins/wso2esb-481-startup-handler.py, at line 75, I got an
error on MB_IP: the PCA looks in the values array for an MB_IP key and doesn't
find any MB_IP property. Since I didn't want to recreate the full cartridge,
groups and application, I just hardcoded MB_IP to "tcp://[mb_ip]:[mb_port]",
but I suppose I only need to create the cartridge with an MB_IP property. This
was neither documented nor present in the sample cartridge.
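
For reference, this is roughly what I changed (just a sketch; the actual
variable names in the plugin may differ):

# wso2esb-481-startup-handler.py, around line 75 (sketch, names approximate)
mb_ip = values.get('MB_IP')  # payload parameters arrive in the 'values' dict
if mb_ip is None:
    # temporary hardcode until the cartridge is recreated with a
    # payload_parameter.MB_IP property in its definition
    mb_ip = 'tcp://[mb_ip]:[mb_port]'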

Now I have just a few questions:

1) In PPaaS 4.0.0 an LB was automatically spawned and configured for each
cartridge. That no longer happens, and I would also need to pass LB_IP in the
cartridge definition, so I suppose it requires a manually deployed LB (WSO2 ELB
was great). Am I right?

2) I saw that the configurator only supports WSO2 AM, IS and ESB. For the other
WSO2 products I need (MB, IS, DAS (or CEP + BAM) and EMS), can I use the old
PPaaS 4.0.0 cartridges and puppet definitions?

Thank you very much,

Marco
________________________________
From: Imesh Gunaratne [[email protected]]
Sent: Friday, 11 September 2015 14:43
To: Monaco Marco
Cc: Anuruddha Liyanarachchi; WSO2 Developers' List
Subject: Re: Strange Error in WSO2 Private Paas 4.1.2

Hi Marco,

On Fri, Sep 11, 2015 at 12:49 PM, Monaco Marco
<[email protected]> wrote:
Hi Imesh,

thanks for the suggestion; I solved it by deleting the entire Stratos VM and
creating everything from scratch. Anyway, even though I can spawn new instances
I'm still facing problems here: I forgot to modify
./repository/conf/cartridge-config.properties, and now the launch-params file
has the wrong puppet master IP. I fixed the file after the first try and also
restarted Stratos, but when I try to deploy instances I still see that
launch-params has the wrong configuration.

I'm sorry, this is a bug. You could overcome it for the moment by deleting the
application and creating it again. The problem is that the puppet master
properties are added to the payload once an application is created.
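
Something along these lines should work against the REST API (a sketch only,
assuming the default admin credentials and the application id from your
definition; adjust host and file names to your setup):

import requests

# Recreate the application so its payload picks up the corrected puppet
# master properties (undeploy, delete, then add it again).
base = 'https://localhost:9443/api'
auth = ('admin', 'admin')

requests.post(base + '/applications/wso2esb-481-application/undeploy',
              auth=auth, verify=False)
requests.delete(base + '/applications/wso2esb-481-application',
                auth=auth, verify=False)
with open('wso2esb-481-application.json') as f:
    requests.post(base + '/applications', data=f.read(),
                  headers={'Content-Type': 'application/json'},
                  auth=auth, verify=False)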

I noticed the same with the IaaS configuration: if I want to change some IaaS
properties in the cloud-controller.xml file after the first launch of Stratos,
the modification does not take effect. The only way is to delete all files, the
H2 DB and the MySQL DB, and configure everything from scratch.

Yes, this is similar to the above issue; if you redeploy the cartridge the
problem can be solved. Here the problem is that we have an IaaS configuration
cache in the cloud controller. It first takes the configuration from
cloud-controller.xml and then applies the values specified in the cartridge
definition (to be overwritten) on top of that. This cache is not updated once
we update the cloud-controller.xml file.
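
For example, re-sending the cartridge definition refreshes the cache (a sketch,
assuming the default admin credentials; in the Stratos 4.1 REST API,
PUT /cartridges updates an existing definition):

import requests

# Redeploy the cartridge so the cloud controller rebuilds its cached
# IaaS configuration for it.
with open('wso2esb-481-manager.json') as f:
    requests.put('https://localhost:9443/api/cartridges', data=f.read(),
                 headers={'Content-Type': 'application/json'},
                 auth=('admin', 'admin'), verify=False)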

Just to clarify: apart from the above issues, were you able to deploy an
application on OpenStack successfully with Private PaaS 4.1.0? If not, please
let us know; we can arrange a Google hangout and have a look.

Thanks

On Fri, Sep 11, 2015 at 12:49 PM, Monaco Marco
<[email protected]> wrote:

Hope you guys have a method to change such configuration without requiring a
fresh reinstall of the product...

Thank you as usual,

Marco
________________________________
From: Imesh Gunaratne [[email protected]]
Sent: Wednesday, 9 September 2015 18:58
To: Monaco Marco
Cc: Anuruddha Liyanarachchi; WSO2 Developers' List

Subject: Re: Strange Error in WSO2 Private Paas 4.1.2

Hi Marco,

Which Private PaaS distribution are you using? Can you please try
4.1.0-Alpha:
https://svn.wso2.org/repos/wso2/scratch/PPAAS/wso2ppaas-4.1.0-ALPHA/

Thanks

On Wed, Sep 9, 2015 at 11:22 AM, Monaco Marco
<[email protected]> wrote:
Hi Anuruddha,

many thanks. I tried it both by pasting the JSON in the GUI's JSON view and
through the UI wizard.

When I do it via the GUI it just hangs for a while and comes back to the
previous page... nothing is logged, even at DEBUG log level.

Still stuck at this point.

Marco


Sent from my Samsung device


-------- Original Message --------
From: Anuruddha Liyanarachchi <[email protected]>
Date: 09/09/2015 07:47 (GMT+01:00)
To: Monaco Marco <[email protected]>
Cc: WSO2 Developers' List <[email protected]>, [email protected]
Subject: Re: Strange Error in WSO2 Private Paas 4.1.2

Hi Marco,

This error occurs when the cartridge definition doesn't contain an iaasProvider
section. I guess the path to your cartridge JSON is incorrect in the curl
command.
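
A quick check of the file you are posting can rule this out (a sketch; if the
path after curl's -d @ can't be read, curl falls back to an empty request body,
which would produce exactly this error):

import json

# Confirm the definition actually carries an iaasProvider section before
# posting it to /api/cartridges.
with open('wso2esb-481-manager.json') as f:  # path assumed; use your own file
    cartridge = json.load(f)
assert cartridge.get('iaasProvider'), 'iaasProvider section missing'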

I deployed the same cartridge JSON that you are using and did not face any
issue.
Can you switch to the JSON view in the UI, paste the cartridge definition, and
try to add it [1]?

[1]
https://drive.google.com/file/d/0B0v957zZwVWrQjVWX2x6YzI3Mkk/view?usp=sharing


On Wed, Sep 9, 2015 at 12:39 AM, Monaco Marco
<[email protected]> wrote:
Anuruddha,

thank you for the suggestion.

I made a git pull from the repo without deleting the DBs (just replacing files);
it started to spawn new instances, but it complained about public IP addresses
and stopped.

I made a fresh installation deleting all SQL DBs, but now we are stuck on
another error.

I successfully set up the Network Partition, Autoscaling Policies and
Deployment Policies again, but when I came to the cartridges Stratos started to
misbehave.

If I try to set up a cartridge in the GUI it just gets stuck and then comes
back to the Add Cartridge page.

I tried to use the API (https://127.0.0.1:9443/api/cartridges), sending this
JSON:

{
    "type": "wso2esb-481-manager",
    "category": "framework",
    "provider": "wso2",
    "host": "esb.alma.it<http://esb.alma.it>",
    "displayName": "WSO2 ESB 4.8.1 Manager",
    "description": "WSO2 ESB 4.8.1 Manager Cartridge",
    "version": "4.8.1",
    "multiTenant": false,
    "loadBalancingIPType": "private",
    "portMapping": [
        {
            "name": "mgt-http",
            "protocol": "http",
            "port": 9763,
            "proxyPort": 0
        },
        {
            "name": "mgt-https",
            "protocol": "https",
            "port": 9443,
            "proxyPort": 0
        },
        {
            "name": "pt-http",
            "protocol": "http",
            "port": 8280,
            "proxyPort": 0
        },
        {
            "name": "pt-https",
            "protocol": "https",
            "port": 8243,
            "proxyPort": 0
        }
    ],
    "iaasProvider": [
        {
            "type": "openstack",
            "imageId": "RegionOne/c2951a15-47b7-4f9c-a6e0-d3b7a50bc9aa",
            "networkInterfaces": [
                {
                    "networkUuid": "bd02ca5c-4a57-45c3-8478-db0624829bdb"
                }
            ],
            "property": [
                {
                    "name": "instanceType",
                    "value": "RegionOne/3"
                },
                {
                    "name": "securityGroups",
                    "value": "default"
                },
                {
                    "name": "autoAssignIp",
                    "value": "true"
                },
                {
                    "name": "keyPair",
                    "value": "alma-keypair"
                }
            ]
        }
    ],
    "property": [
        {
            "name": "payload_parameter.CONFIG_PARAM_CLUSTERING",
            "value": "true"
        },
        {
            "name": "payload_parameter.LB_IP",
            "value": "<LOAD_BALANCER_IP>"
        }
    ]
}

and I got this response:

{"status":"error","message":"IaaS providers not found in cartridge: null"}

In the Stratos logs I can see the same error:

[2015-09-08 19:00:26,600] ERROR 
{org.apache.stratos.rest.endpoint.handlers.CustomExceptionMapper} -  IaaS 
providers not found in cartridge: null
org.apache.stratos.rest.endpoint.exception.RestAPIException: IaaS providers not 
found in cartridge: null
        at 
org.apache.stratos.rest.endpoint.api.StratosApiV41Utils.addCartridge(StratosApiV41Utils.java:126)
        at 
org.apache.stratos.rest.endpoint.api.StratosApiV41.addCartridge(StratosApiV41.java:292)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at 
org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:180)
        at 
org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96)
        at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:194)
        at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:100)
        at 
org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:57)
        at 
org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:93)
        at 
org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:271)
        at 
org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
        at 
org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:239)
        at 
org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:223)
        at 
org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:203)
        at 
org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:137)
        at 
org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:159)
        at 
org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:286)
        at 
org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:206)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:755)
        at 
org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:262)
        at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)


and I'm not able to add any cartridge, including the simple PHP one.

I triple checked all IaaS settings in cloud-controller.xml and in cartridge 
definitions and they are correct.

Any thoughts?

Thanks....

Marco
________________________________
From: Anuruddha Liyanarachchi [[email protected]]
Sent: Tuesday, 8 September 2015 15:46
To: Monaco Marco; WSO2 Developers' List
Cc: [email protected]
Subject: Re: Strange Error in WSO2 Private Paas 4.1.2

[Removing stratos dev and adding wso2dev ]

Hi Marco,

We have fixed this in the master branch with commit [1]. Please take a pull
from the master branch or download the alpha release pack from [2].

[1]https://github.com/wso2/product-private-paas/commit/54a77e9d85538a20ace00c57ec9e8aed410fb773
[2]https://svn.wso2.org/repos/wso2/scratch/PPAAS/wso2ppaas-4.1.0-ALPHA/

On Tue, Sep 8, 2015 at 7:06 PM, Monaco Marco
<[email protected]> wrote:
Hi,

We have successfully installed WSO2 PPaaS 4.1.2 on our OpenStack IaaS
environment.

After following this procedure
(https://docs.wso2.com/display/PP410/Deploy+Private+PaaS+in+OpenStack) we are
able to open the Private PaaS console and configure Network Partitions,
Autoscaling Policies, Deployment Policies, Cartridges, etc.

We have problems trying to deploy applications. We tested both PHP and WSO2 ESB
applications, also using the Mock IaaS, but we always receive the same error:

ERROR {org.apache.stratos.autoscaler.rule.AutoscalerRuleEvaluator} -  Unable to 
Analyse Expression log.debug("[scaling] Number of required instances based on 
stats: " + numberOfRequiredInstances + " " +
                "[active instances count] " + activeInstancesCount + " 
[network-partition] " +
                clusterInstanceContext.getNetworkPartitionId() + " [cluster] " 
+ clusterId);

        int nonTerminatedMembers = 
clusterInstanceContext.getNonTerminatedMemberCount();
        if(scaleUp){

            int clusterMaxMembers = 
clusterInstanceContext.getMaxInstanceCount();
            if (nonTerminatedMembers < clusterMaxMembers) {

                int additionalInstances = 0;
                if(clusterMaxMembers < numberOfRequiredInstances){

                    additionalInstances = clusterMaxMembers - 
nonTerminatedMembers;
                    log.info("[scale-up] Required member count 
based on stat based scaling is higher than max, hence"
                            + " notifying to parent for possible group scaling 
or app bursting. [cluster] " + clusterId
                            + " [instance id]" + clusterInstanceContext.getId() 
+ " [max] " + clusterMaxMembers
                            + " [number of required instances] " + 
numberOfRequiredInstances
                            + " [additional instances to be created] " + 
additionalInstances);
                    delegator.delegateScalingOverMaxNotification(clusterId, 
clusterInstanceContext.getNetworkPartitionId(),
                        clusterInstanceContext.getId());
                } else {

                    additionalInstances = numberOfRequiredInstances - 
nonTerminatedMembers;
                }

                clusterInstanceContext.resetScaleDownRequestsCount();

                log.debug("[scale-up] " + " [has scaling dependents] " + 
clusterInstanceContext.hasScalingDependants() +
                    " [cluster] " + clusterId );
                if(clusterInstanceContext.hasScalingDependants()) {

                    log.debug("[scale-up] Notifying dependencies [cluster] " + 
clusterId);
                    delegator.delegateScalingDependencyNotification(clusterId, 
clusterInstanceContext.getNetworkPartitionId(),
                        clusterInstanceContext.getId(), 
numberOfRequiredInstances, clusterInstanceContext.getMinInstanceCount());
                } else {

                    boolean partitionsAvailable = true;
                    int count = 0;

                    String autoscalingReason = (numberOfRequiredInstances == 
numberOfInstancesReuquiredBasedOnRif)?"Scaling up due to RIF, [Predicted Value] 
"+rifPredictedValue+" [Threshold] "+rifThreshold:(numberOfRequiredInstances== 
numberOfInstancesReuquiredBasedOnMemoryConsumption)?"Scaling up due to MC, 
[Predicted Value] "+mcPredictedValue+" [Threshold] "+mcThreshold:"Scaling up 
due to LA, [Predicted Value] "+laPredictedValue+" [Threshold] "+laThreshold;
                    autoscalingReason += " [Number of required instances] 
"+numberOfRequiredInstances+" [Cluster Max Members] "+clusterMaxMembers+" 
[Additional instances to be created] " + additionalInstances;


                    while(count != additionalInstances && partitionsAvailable){

                        ClusterLevelPartitionContext partitionContext = 
(ClusterLevelPartitionContext) 
partitionAlgorithm.getNextScaleUpPartitionContext(clusterInstanceContext.getPartitionCtxtsAsAnArray());
                        if(partitionContext != null){

                            log.info("[scale-up] Partition 
available, hence trying to spawn an instance to scale up! " +
                                " [application id] " + applicationId +
                                " [cluster] " + clusterId + " [instance id] " + 
clusterInstanceContext.getId() +
                                " [network-partition] " + 
clusterInstanceContext.getNetworkPartitionId() +
                                " [partition] " + 
partitionContext.getPartitionId() +
                                " scaleup due to RIF: " + (rifReset && 
(rifPredictedValue > rifThreshold)) +
                                " [rifPredictedValue] " + rifPredictedValue + " 
[rifThreshold] " + rifThreshold +
                                " scaleup due to MC: " + (mcReset && 
(mcPredictedValue > mcThreshold)) +
                                " [mcPredictedValue] " + mcPredictedValue + " 
[mcThreshold] " + mcThreshold +
                                " scaleup due to LA: " + (laReset && 
(laPredictedValue > laThreshold)) +
                                " [laPredictedValue] " + laPredictedValue + " 
[laThreshold] " + laThreshold);

                            log.debug("[scale-up] " + " [partition] " + 
partitionContext.getPartitionId() + " [cluster] " + clusterId );
                            long scalingTime = System.currentTimeMillis();
                            delegator.delegateSpawn(partitionContext, 
clusterId, clusterInstanceContext.getId(), 
isPrimary,autoscalingReason,scalingTime);
                            count++;
                        } else {

                            log.warn("[scale-up] No more partition available 
even though " +
                             "cartridge-max is not reached!, [cluster] " + 
clusterId +
                            " Please update deployment-policy with new 
partitions or with higher " +
                             "partition-max");
                            partitionsAvailable = false;
                        }
                    }
                }
            } else {
                log.info("[scale-up] Trying to scale up over 
max, hence not scaling up cluster itself and
                        notifying to parent for possible group scaling or app 
bursting.
                        [cluster] " + clusterId + " [instance id]" + 
clusterInstanceContext.getId() +
                        " [max] " + clusterMaxMembers);
                delegator.delegateScalingOverMaxNotification(clusterId, 
clusterInstanceContext.getNetworkPartitionId(),
                    clusterInstanceContext.getId());
            }
        } else if(scaleDown){

            if(nonTerminatedMembers > 
clusterInstanceContext.getMinInstanceCount){

                log.debug("[scale-down] Decided to Scale down [cluster] " + 
clusterId);
                if(clusterInstanceContext.getScaleDownRequestsCount() > 2 ){

                    log.debug("[scale-down] Reached scale down requests 
threshold [cluster] " + clusterId + " Count " +
                        clusterInstanceContext.getScaleDownRequestsCount());

                    if(clusterInstanceContext.hasScalingDependants()) {

                        log.debug("[scale-up] Notifying dependencies [cluster] 
" + clusterId);
                        
delegator.delegateScalingDependencyNotification(clusterId, 
clusterInstanceContext.getNetworkPartitionId(),
                            clusterInstanceContext.getId(), 
numberOfRequiredInstances, clusterInstanceContext.getMinInstanceCount());
                    } else{

                        MemberStatsContext selectedMemberStatsContext = null;
                        double lowestOverallLoad = 0.0;
                        boolean foundAValue = false;
                        ClusterLevelPartitionContext partitionContext = 
(ClusterLevelPartitionContext) 
partitionAlgorithm.getNextScaleDownPartitionContext(clusterInstanceContext.getPartitionCtxtsAsAnArray());
                        if(partitionContext != null){
                            log.info("[scale-down] Partition 
available to scale down " +
                                " [application id] " + applicationId +
                                " [cluster] " + clusterId + " [instance id] " + 
clusterInstanceContext.getId() +
                                " [network-partition] " + 
clusterInstanceContext.getNetworkPartitionId() +
                                " [partition] " + 
partitionContext.getPartitionId() +
                                " scaledown due to RIF: " + (rifReset && 
(rifPredictedValue < rifThreshold)) +
                                " [rifPredictedValue] " + rifPredictedValue + " 
[rifThreshold] " + rifThreshold +
                                " scaledown due to MC: " + (mcReset && 
(mcPredictedValue < mcThreshold)) +
                                " [mcPredictedValue] " + mcPredictedValue + " 
[mcThreshold] " + mcThreshold +
                                " scaledown due to LA: " + (laReset && 
(laPredictedValue < laThreshold)) +
                                " [laPredictedValue] " + laPredictedValue + " 
[laThreshold] " + laThreshold
                            );

                            // In partition context member stat context, all 
the primary members need to be
                            // avoided being selected as the member to 
terminated


                            for(MemberStatsContext memberStatsContext: 
partitionContext.getMemberStatsContexts().values()){

                                if( 
!primaryMembers.contains(memberStatsContext.getMemberId()) ) {

                                LoadAverage loadAverage = 
memberStatsContext.getLoadAverage();
                                log.debug("[scale-down] " + " [cluster] "
                                    + clusterId + " [member] " + 
memberStatsContext.getMemberId() + " Load average: " + loadAverage);

                                MemoryConsumption memoryConsumption = 
memberStatsContext.getMemoryConsumption();
                                log.debug("[scale-down] " + " [partition] " + 
partitionContext.getPartitionId() + " [cluster] "
                                    + clusterId + " [member] " + 
memberStatsContext.getMemberId() + " Memory consumption: " +
                                    memoryConsumption);

                                double predictedCpu = 
delegator.getPredictedValueForNextMinute(loadAverage.getAverage(),
                                    
loadAverage.getGradient(),loadAverage.getSecondDerivative(), 1);
                                log.debug("[scale-down] " + " [partition] " + 
partitionContext.getPartitionId() + " [cluster] "
                                    + clusterId + " [member] " + 
memberStatsContext.getMemberId() + " Predicted CPU: " + predictedCpu);

                                double predictedMemoryConsumption = 
delegator.getPredictedValueForNextMinute(
                                    
memoryConsumption.getAverage(),memoryConsumption.getGradient(),memoryConsumption.getSecondDerivative(),
 1);
                                log.debug("[scale-down] " + " [partition] " + 
partitionContext.getPartitionId() + " [cluster] "
                                    + clusterId + " [member] " + 
memberStatsContext.getMemberId() + " Predicted memory consumption: " +
                                        predictedMemoryConsumption);

                                double overallLoad = (predictedCpu + 
predictedMemoryConsumption) / 2;
                                log.debug("[scale-down] " + " [partition] " + 
partitionContext.getPartitionId() + " [cluster] "
                                    + clusterId + " [member] " + 
memberStatsContext.getMemberId() + " Overall load: " + overallLoad);

                                if(!foundAValue){
                                    foundAValue = true;
                                    selectedMemberStatsContext = 
memberStatsContext;
                                    lowestOverallLoad = overallLoad;
                                } else if(overallLoad < lowestOverallLoad){
                                    selectedMemberStatsContext = 
memberStatsContext;
                                    lowestOverallLoad = overallLoad;
                                }


                              }

                            }
                            if(selectedMemberStatsContext != null) {
                                log.info("[scale-down] Trying 
to terminating an instace to scale down!" );
                                log.debug("[scale-down] " + " [partition] " + 
partitionContext.getPartitionId() + " [cluster] "
                                    + clusterId + " Member with lowest overall 
load: " + selectedMemberStatsContext.getMemberId());

                                delegator.delegateTerminate(partitionContext, 
selectedMemberStatsContext.getMemberId());
                            }
                        }
                    }
                } else{
                     log.debug("[scale-down] Not reached scale down requests 
threshold. " + clusterId + " Count " +
                        clusterInstanceContext.getScaleDownRequestsCount());
                     clusterInstanceContext.increaseScaleDownRequestsCount();

                }
            } else {
                log.debug("[scale-down] Min is reached, hence not scaling down 
[cluster] " + clusterId + " [instance id]"
                    + clusterInstanceContext.getId());
                //if(clusterInstanceContext.isInGroupScalingEnabledSubtree()){

                    
delegator.delegateScalingDownBeyondMinNotification(clusterId, 
clusterInstanceContext.getNetworkPartitionId(),
                        clusterInstanceContext.getId());
                //}
            }
        }  else{
            log.debug("[scaling] No decision made to either scale up or scale 
down ... [cluster] " + clusterId + " [instance id]"
             + clusterInstanceContext.getId());

        };:
[Error: unable to resolve method using strict-mode: 
org.apache.stratos.autoscaler.rule.RuleTasksDelegator.delegateSpawn(org.apache.stratos.autoscaler.context.partition.ClusterLevelPartitionContext,
 java.lang.String, java.lang.String, java.lang.Boolean, java.lang.String, long)]
[Near : {... delegator.delegateSpawn(partitionContext ....}]

Once we try to deploy the application it hangs, and it's impossible to remove
it unless we erase and repopulate the Stratos database. It comes up every time
we restart Stratos (if we don't recreate the DB), but in any case it's
impossible to deploy any application.

These are the configurations that we used for WSO2 ESB (we configured the
puppet side correctly according to
https://docs.wso2.com/display/PP410/Configuring+Puppet+Master and
https://github.com/wso2/product-private-paas/tree/master/cartridges/templates-modules/wso2esb-4.8.1).

Autoscaling policy:
{
    "id": "Autoscaling-WSO2",
    "loadThresholds": {
        "requestsInFlight": {
            "threshold": 20
        },
        "memoryConsumption": {
            "threshold": 80
        },
        "loadAverage": {
            "threshold": 120
        }
    }
}

DEPLOYMENT POLICY:
{
    "id": "Deployment-WSO2",
    "networkPartitions": [
        {
            "id": "NP1",
            "partitionAlgo": "round-robin",
            "partitions": [
                {
                    "id": "P1",
                    "partitionMax": 5,
                    "partitionMin": 1
                }
            ]
        }
    ]
}

APPLICATION POLICY:
{
    "id": "Application-WSO2",
    "algorithm": "one-after-another",
    "networkPartitions": [
        "NP1"
    ],
    "properties": [
    ]
}

CARTRIDGES:

MANAGER:
{
    "type": "wso2esb-481-manager",
    "category": "framework",
    "provider": "wso2",
    "host": "esb.alma.it<http://esb.alma.it>",
    "displayName": "WSO2 ESB 4.8.1 Manager",
    "description": "WSO2 ESB 4.8.1 Manager Cartridge",
    "version": "4.8.1",
    "multiTenant": false,
    "loadBalancingIPType": "public",
    "portMapping": [
        {
            "name": "mgt-http",
            "protocol": "http",
            "port": 9763,
            "proxyPort": 0
        },
        {
            "name": "mgt-https",
            "protocol": "https",
            "port": 9443,
            "proxyPort": 0
        },
        {
            "name": "pt-http",
            "protocol": "http",
            "port": 8280,
            "proxyPort": 0
        },
        {
            "name": "pt-https",
            "protocol": "https",
            "port": 8243,
            "proxyPort": 0
        }
    ],
    "iaasProvider": [
        {
            "type": "openstack",
            "imageId": "RegionOne/c2951a15-47b7-4f9c-a6e0-d3b7a50bc9aa",
            "property": [
                {
                    "name": "instanceType",
                    "value": "RegionOne/3"
                },
                {
                    "name": "keyPair",
                    "value": "alma_admin_keypair"
                },
                {
                    "name": "securityGroups",
                    "value": "default"
                }
            ],
            "networkInterfaces": [
                {
                    "networkUuid": "ea0edbc6-6d6d-4efe-b11c-7cb3cb78256f"
                }
            ]
        }
    ],
    "property": [
        {
            "name": "payload_parameter.CONFIG_PARAM_CLUSTERING",
            "value": "true"
        },
        {
            "name": "payload_parameter.LB_IP",
            "value": "<LOAD_BALANCER_IP>"
        }
    ]
}


WORKER:
{
    "type": "wso2esb-481-worker",
    "category": "framework",
    "provider": "wso2",
    "host": "esb.alma.it<http://esb.alma.it>",
    "displayName": "WSO2 ESB 4.8.1 Worker",
    "description": "WSO2 ESB 4.8.1 Worker Cartridge",
    "version": "4.8.1",
    "multiTenant": false,
    "loadBalancingIPType": "public",
    "portMapping": [
        {
            "name": "pt-http",
            "protocol": "http",
            "port": 8280,
            "proxyPort": 0
        },
        {
            "name": "pt-https",
            "protocol": "https",
            "port": 8243,
            "proxyPort": 0
        }
    ],
    "iaasProvider": [
        {
            "type": "openstack",
            "imageId": "RegionOne/c2951a15-47b7-4f9c-a6e0-d3b7a50bc9aa",
            "property": [
                {
                    "name": "instanceType",
                    "value": "RegionOne/3"
                },
                {
                    "name": "keyPair",
                    "value": "alma_admin_keypair"
                },
                {
                    "name": "securityGroups",
                    "value": "default"
                }
            ],
            "networkInterfaces": [
                {
                    "networkUuid": "ea0edbc6-6d6d-4efe-b11c-7cb3cb78256f"
                }
            ]
        }
    ],
    "property": [
        {
            "name": "payload_parameter.CONFIG_PARAM_CLUSTERING",
            "value": "true"
        },
        {
            "name": "payload_parameter.LB_IP",
            "value": "<LOAD_BALANCER_IP>"
        }
    ]
}



GROUPING:
{
    "name": "wso2esb-481-group",
    "cartridges": [
        "wso2esb-481-manager",
        "wso2esb-481-worker"
    ],
    "dependencies": {
        "startupOrders": [
            {
                "aliases": [
                    "cartridge.wso2esb-481-manager",
                    "cartridge.wso2esb-481-worker"
                ]
            }
        ]
    }
}



APPLICATION:
{
    "applicationId": "wso2esb-481-application",
    "alias": "wso2esb-481-application",
    "multiTenant": true,
    "components": {
        "groups": [
            {
                "name": "wso2esb-481-group",
                "alias": "wso2esb-481-group",
                "deploymentPolicy": "Deployment-WSO2",
                "groupMinInstances": 1,
                "groupMaxInstances": 1,
                "cartridges": [
                    {
                        "type": "wso2esb-481-manager",
                        "cartridgeMin": 1,
                        "cartridgeMax": 1,
                        "subscribableInfo": {
                            "alias": "wso2esb-481-manager",
                            "autoscalingPolicy": "Autoscaling-WSO2"
                        }
                    },
                    {
                        "type": "wso2esb-481-worker",
                        "cartridgeMin": 2,
                        "cartridgeMax": 5,
                        "subscribableInfo": {
                            "alias": "wso2esb-481-worker",
                            "autoscalingPolicy": "Autoscaling-WSO2"
                        }
                    }
                ]
            }
        ]
    }
}


Can you please help us? We are stuck at this point.

Thank you all,

Marco



--
Thanks and Regards,
Anuruddha Lanka Liyanarachchi
Software Engineer - WSO2
Mobile : +94 (0) 712762611
Tel      : +94 112 145 345
[email protected]



--
Thanks and Regards,
Anuruddha Lanka Liyanarachchi
Software Engineer - WSO2
Mobile : +94 (0) 712762611
Tel      : +94 112 145 345
[email protected]



--
Imesh Gunaratne
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.gunaratne.org
Lean . Enterprise . Middleware




--
Imesh Gunaratne
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.gunaratne.org
Lean . Enterprise . Middleware

_______________________________________________
Dev mailing list
[email protected]
http://wso2.org/cgi-bin/mailman/listinfo/dev
