[Dev] Getting null values when converting nested json objects into x-www-form-urlencoded format

2016-09-21 Thread Keerthika Mahendralingam
Hi All,

I am trying to convert a JSON payload that has nested JSON objects into
x-www-form-urlencoded format. The nested element values are converted to null
during the transformation.

I am using the following configuration, on both ESB 4.9.0 and 5.0.0:


<proxy xmlns="http://ws.apache.org/ns/synapse"
   name="testProxy"
   transports="https,http"
   statistics="disable"
   trace="disable"
   startOnLoad="true">
   
  
 
{
  "name": "user1",
  "address": { "line1": "street1" },
  "age": 12
}

The output is:
name=user1==12

But it should be:
name=user1[line1]=street1=12

Is there any way to achieve this?
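For reference, the expected flattening uses PHP/Rails-style bracket notation for nested keys. A minimal sketch with the sample payload hard-coded (not an ESB mediator):

```shell
# Expected flattened body for the sample payload above. Nested keys use
# bracket notation; values are hard-coded here for illustration, and a
# real client would also percent-encode reserved characters such as "[".
name="user1"; line1="street1"; age=12
body="name=${name}&address[line1]=${line1}&age=${age}"
echo "$body"   # name=user1&address[line1]=street1&age=12
```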

Thanks,
Keerthika.
-- 

Keerthika Mahendralingam
Software Engineer
Mobile :+94 (0) 776 121144
keerth...@wso2.com
WSO2, Inc.
lean . enterprise . middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [puppet] Setting up puppet home from script

2016-09-21 Thread Anuruddha Liyanarachchi
Hi Vishanth,

I have fixed the issue. Please take a pull.

Regards,
Anuruddha
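As a postscript, the bash 3.x limitation discussed in the quoted thread below (no `declare -A` on the stock macOS shell) can be worked around with two parallel indexed arrays; a sketch with a hypothetical product list:

```shell
#!/usr/bin/env bash
# Portable alternative to "declare -A" for bash 3.x: keep keys and values
# in two parallel indexed arrays and scan for the key on lookup.
product_codes=(apim esb das)            # hypothetical codes
product_names=(wso2am wso2esb wso2das)  # matching product names

product_name_for() {
  local code=$1 i
  for i in "${!product_codes[@]}"; do
    if [ "${product_codes[$i]}" = "$code" ]; then
      printf '%s\n' "${product_names[$i]}"
      return 0
    fi
  done
  return 1
}

product_name_for apim   # prints wso2am
```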

On Wed, Sep 21, 2016 at 10:38 PM, Vishanth Balasubramaniam <
vishan...@wso2.com> wrote:

> Hi,
>
> That is probably the reason. And yes, it should be fixed.
>
> Regards,
> Vishanth
>
> On Wed, Sep 21, 2016 at 11:06 AM, Anuruddha Liyanarachchi <
> anurudd...@wso2.com> wrote:
>
>> Hi Vishanth,
>>
>> "Declare -A" is not working on macOS because the bash version that ships
>> with macOS does not support declaring associative arrays [1]. Associative
>> array support is available from bash 4 upwards, and the macOS bash
>> version is 3.2.57.
>>
>> I am currently looking into an alternative method without using
>> associative arrays.
>>
>> [1] http://stackoverflow.com/questions/6047648/bash-4-associ
>> ative-arrays-error-declare-a-invalid-option
>>
>> On Tue, Sep 20, 2016 at 5:23 PM, Vishanth Balasubramaniam <
>> vishan...@wso2.com> wrote:
>>
>>> Hi Akila,
>>>
>>> With those changes, whatever product I specify it is setting up wso2das.
>>>
>>> For example, when I give *./setup.sh -p esb*, it is setting up wso2das
>>> module.
>>>
>>> Probably the declaration is not properly functioning.
>>>
>>> Regards,
>>> Vishanth
>>>
>>> On Mon, Sep 19, 2016 at 1:59 PM, Akila Ravihansa Perera <
>>> raviha...@wso2.com> wrote:
>>>
 Hi,

 I've improved the Puppet-Home setup script by introducing two maps:
 product_code_to_name_map and product_name_to_module_repo_map. With this
 approach we can handle cases where the product name and product code are
 different; e.g. the API Manager code is "apim" while the product name is
 "wso2am".

 I've also added platform support for Hiera data. You can use
 "setup.sh -p  -l " to set up Puppet Home with
 Hiera data for a specific platform. If none is given, it defaults to the
 'default' platform.

 The relevant platform repo for the given product should contain a
 hieradata directory which will be symlinked to PUPPET_HOME/hieradata. For
 example, for the wso2esb Kubernetes platform, https://github.com/wso2/kubernetes-esb
 should contain a "hieradata" directory at the repo root level:
 https://github.com/wso2/kubernetes-esb/tree/master/hieradata

 Thanks.

 On Thu, Sep 8, 2016 at 9:59 AM, Anuruddha Liyanarachchi <
 anurudd...@wso2.com> wrote:

> Hi Pubudu,
>
> +1 for the platform support.
> I will add the platform support once we have finalized the platform
> hieradata structure.
>
> On Thu, Sep 8, 2016 at 3:44 AM, Imesh Gunaratne 
> wrote:
>
>>
>>
>> On Wed, Sep 7, 2016 at 10:09 PM, Pubudu Gunatilaka 
>> wrote:
>>
>>>
>>> I think we need to include the platform as well. If we consider the
>>> big picture, ideally any user should be able to use this script and 
>>> create
>>> a puppet home for building docker images for Kubernetes, Mesos, or any
>>> other platforms. As we have separate repos for platform hieradata,  we 
>>> need
>>> to copy those hieradata to the puppet home repo.
>>>
>>
>> ​+1​
>>
>>
>>>
>>> Thank you!
>>>
>>>
>>> On Wed, Sep 7, 2016 at 8:21 PM, Imesh Gunaratne 
>>> wrote:
>>>


 On Wed, Sep 7, 2016 at 6:24 PM, Anuruddha Liyanarachchi <
 anurudd...@wso2.com> wrote:

> Hi Imesh,
>
> I have now added the ability to configure multiple products using
> comma separated product list.
> Also included '-p all' option which configures all the products.
>
> Ex: ./setup.sh -p as
> Ex: ./setup.sh -p as,esb,bps
> Ex: ./setup.sh -p all
>

 Great!
 ​Nice to hear that!

>
> On Wed, Sep 7, 2016 at 12:46 AM, Imesh Gunaratne 
> wrote:
>
>> Great work Anuruddha! The bash script works well!
>>
>> Shall we add the ability to install multiple product modules in
>> one go? Maybe we can use a comma separated product list with -p.
>>
>> Thanks
>>
>> On Tue, Sep 6, 2016 at 6:26 PM, Anuruddha Liyanarachchi <
>> anurudd...@wso2.com> wrote:
>>
>>> Hi,
>>>
>>> We have created separate GitHub repos for Puppet modules so
>>> that the Puppet module can be shipped as part of a product release.
>>>
>>> Since modules are distributed, we have introduced a script to
>>> generate PUPPET_HOME. The script works as follows.
>>>
>>>  1. Check puppet_home folder exists.
>>>
>>>  2. Create folder structure required for puppet_home.
>>> ├── hiera.yaml
>>> ├── hieradata
>>> ├── manifests
>>> └── modules
>>>
>>> 3.  Create a symlink to manifest/site.pp file.
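The steps quoted above might be condensed into shell along these lines (a sketch; paths and repo layout are assumptions, not the actual script):

```shell
# Steps 1-3 sketched: ensure PUPPET_HOME exists with the expected layout,
# then symlink a site.pp from a checked-out repo into manifests/.
PUPPET_HOME=${PUPPET_HOME:-$PWD/puppet_home}
mkdir -p "$PUPPET_HOME/hieradata" "$PUPPET_HOME/manifests" "$PUPPET_HOME/modules"
ln -sf "$PWD/site.pp" "$PUPPET_HOME/manifests/site.pp"
```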

Re: [Dev] Connectivity from ESB to other Apps using SSO

2016-09-21 Thread Malaka Silva
Hi Srikanth,

You will find samples in our documentation.

[1]
https://docs.wso2.com/display/ESBCONNECTORS/Salesforce+Connector+and+Streaming+Inbound+Endpoint

On Wed, Sep 21, 2016 at 6:50 PM, Srikanth Puppala <
puppala.srika...@gmail.com> wrote:

> Hi Manu,
>
> Thanks a lot for your response, I created new Salesforce developer
> Account. Now I am able to connect to SalesForce via ESB.
>
> Do we have any sample ESB sequence with salesforce operations (like a
> creation of sobject), any reference could be a great help.
>
> Thanks & Regards,
> Srikanth Puppala.
>
> On Wed, Sep 21, 2016 at 12:37 AM, Manuranga Perera  wrote:
>
>> Hi Srikanth,
>>
>> It seems the WSO2 Salesforce connector only works if you have an
>> Enterprise, Unlimited, or Developer edition Salesforce account. If you have
>> a Professional account you will have to buy the API access feature from
>> Salesforce.
>>
>> Can you please let us know which edition you are using?
>>
>> On Wed, Sep 21, 2016 at 10:38 AM, Manuranga Perera  wrote:
>>
>>> The following mail was sent to me by Srikanth; forwarding since the dev
>>> list rejected it because he is not registered.
>>>
>>> Hi,

 I am having issues configuring ESB to talk to Salesforce. I tried the
 following approaches:


 *Approach-1*

 I started connecting to SalesForce using WSO2 SalesForce Connector. I
 ended up with following error from SalesForce:
 *Error, "API is not enabled for this Organization or Partner"*
 Then I tried enabling API's with different accounts like dev/enterprise
 etc and enabled all possible options to connect to SalesForce. Not very
 successful :(
 But the major issue is that clients who configure Salesforce may not
 have API access enabled. In that case, how do we deal with such accounts?
 I'm not sure what to do.

 *Approach-2*

 I thought of connecting to Salesforce via Identity Server: I created a
 Salesforce identity in IS and configured Salesforce accordingly. But I am
 not clear on how to call the Salesforce API from ESB. Is that possible?
 If not, please do let me know the best possible way to solve this
 problem using the WSO2 stack.

 I really appreciate your response. :)

 --
 Thanks & Regards,

 Srikanth Puppala.

 6103065998

>>>
>>>
>>> --
>>> With regards,
>>> *Manu*ranga Perera.
>>>
>>> phone : 071 7 70 20 50
>>> mail : m...@wso2.com
>>>
>>
>>
>>
>> --
>> With regards,
>> *Manu*ranga Perera.
>>
>> phone : 071 7 70 20 50
>> mail : m...@wso2.com
>>
>
>
>
> --
> Thanks & Regards,
> Srikanth Puppala.
> 6103065998
>



-- 

Best Regards,

Malaka Silva
Senior Technical Lead
M: +94 777 219 791
Tel : 94 11 214 5345
Fax :94 11 2145300
Skype : malaka.sampath.silva
LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
Blog : http://mrmalakasilva.blogspot.com/

WSO2, Inc.
lean . enterprise . middleware
https://wso2.com/signature
http://www.wso2.com/about/team/malaka-silva/

https://store.wso2.com/store/

Don't make Trees rare, we should keep them with care


Re: [Dev] [Integration Cloud] Swagger Support for ESB REST APIs

2016-09-21 Thread Joseph Fonseka
Hi

On Tue, Sep 20, 2016 at 4:18 PM, Maheeka Jayasuriya 
wrote:

>
> Please note these changes were done based on the level of information we
> have on the API by referring to its configuration. For example, we do not
> have a way of determining the request format, URI parameter types, or
> content-types of the API. This will require further digging into and
> analysis of the configuration.
>

Have you considered adding the additional information as annotations/notes in
the config? AFAIK we do not have a config syntax to add annotations, so
currently we can define them as properties, but going forward maybe we can
add some way to annotate Synapse.

In the longer run, most Synapse APIs will likely be generated from Swagger
definitions, so the ability to carry all of the API definition
information in the Synapse file will be important.
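A hypothetical sketch of the interim property-based approach mentioned above (the swagger.* property names are invented, not an existing convention):

```xml
<api xmlns="http://ws.apache.org/ns/synapse" name="SampleAPI" context="/sample">
   <resource methods="GET" uri-template="/pet/{petId}">
      <inSequence>
         <!-- invented convention: swagger.* properties carry metadata that a
              generator could later fold back into the Swagger definition -->
         <property name="swagger.summary" value="Find a pet by id"/>
         <property name="swagger.produces" value="application/json"/>
         <respond/>
      </inSequence>
   </resource>
</api>
```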

Thanks
Jo


>
> Find the diff of these changes done to the Swagger definition at [4]
>
> To get the required information from the Rest API configuration we can
> invoke RestApiAdminServices's getApiByName operation which returns the
> following response [5] for an API created that has the PetStore resources.
> We can identify the methods and uri-template and mappings from this
> response.
>
> Please let us know your thoughts.
>
> Thanks,
> Maheeka
>
> [1] http://petstore.swagger.io/#/pet
> [2] https://gist.github.com/maheeka/4eaedd2e2e0765959a4166865bf9adf9
> [3] https://gist.github.com/maheeka/ec23751f21d8d7d5abaa4f9130f233f2
> [4] https://www.diffchecker.com/xEu0NSNz
> [5] http://schemas.
> xmlsoap.org/soap/envelope/">
>
>   http://org.apache.axis2/xsd;>
>  http://api.rest.carbon.wso2.org/xsd; xmlns:xsi="http://www.w3.org/
> 2001/XMLSchema-instance">
> 
> /pet
> PetstoreAPI.xml
> 
> false
> PetstoreAPI
> -1
> 
>
>
>
>
>
>POST
>PUT
>
>
>0
>
>/
>
> 
> 
>
>
>
>
>
>POST
>DELETE
>GET
>
>
>0
>/{petId}
>
>
> 
> false
> false
>  
>   
>
> 
>
>
> Thanks,
>
> Maheeka Jayasuriya
> Senior Software Engineer
> Mobile : +9450661
>



-- 

-- 
*Joseph Fonseka*
WSO2 Inc.; http://wso2.com
lean.enterprise.middleware

mobile: +94 772 512 430
skype: jpfonseka

* *


[Dev] About the Carbon Core documentation and the usage ofMultitenantRESTServlet

2016-09-21 Thread wenxzhen
Appending more questions below, as I still haven't found any descriptions of
the APIs:

From the AS 5.3.0 application model, I found a table named UM_TEMP_INVITE;
where can I find the corresponding read/write API?

For the UM_TENANT table, is the UM_ID/tenantID automatically incremented?
Please kindly help.

Thanks, Wenxing




-- Original --
From: "wenxzhen"
Date: Wed, Sep 21, 2016 08:36 PM
To: "dev"

Subject:  [Dev] About the Carbon Core documentation and the usage 
ofMultitenantRESTServlet



Dear all,

When looking at the Carbon 4.4.9 Java docs [1], I found there are only 3 items:

Carbon core
Carbon registry
Carbon user management

Are these 3 items complete? The reason I ask is that I didn't find the API
docs for the package org.wso2.carbon.tenant.mgt.

Another question: in the Carbon 4.4.9 core documentation [2], I found the
class org.wso2.carbon.core.multitenancy.MultitenantRESTServlet. How is this
class used? Is there any description for reference?

[1] https://docs.wso2.com/display/Carbon449/Java+Documentation
[2] http://product-dist.wso2.com/javadocs/carbon/4.4.9/core-docs/ 

Best Regards,
Wenxing


Re: [Dev] [VOTE] Release WSO2 Data Services Server 3.5.1 RC1

2016-09-21 Thread Anjana Fernando
Hi,

Tested the following:-

* Service Creation
  - SOAP / REST
  - Return Generated Keys
  - Input / Output Mappings Auto Generate
* Data Service Generate
* RequestBox
* Samples

[X] Stable - go ahead and release

Cheers,
Anjana.

On Sat, Sep 17, 2016 at 4:04 PM, Manuri Amaya Perera 
wrote:

> Hi Devs,
>
> This is the 1st release candidate of WSO2 Data Services Server 3.5.1.
>
> This release fixes the following issues:
> https://wso2.org/jira/issues/?filter=13343
>
> Please download, test and vote. Vote will be open for 72 hours or longer
> as needed.
>
> *Source & binary distribution files:*
>
> Runtime: https://github.com/wso2/product-dss/releases/tag/v3.5.1-RC1
> Tooling: https://github.com/wso2/devstudio-tooling-dss/
> releases/tag/v3.5.1-rc1
>
> *Maven staging repo*: https://maven.wso2.org/nexus/content/repositories/
> orgwso2dss-1007/
>
> *The tag to be voted upon:*
> Runtime: https://github.com/wso2/product-dss/releases/tag/v3.5.1-RC1
> Tooling: https://github.com/wso2/devstudio-tooling-dss/
> releases/tag/v3.5.1-rc1
>
>
> [ ] Broken - do not release (explain why)
> [ ] Stable - go ahead and release
>
>
> Thank you
> WSO2 Data Services Server Team
>
>
> --
>
> *Manuri Amaya Perera*
>
> *Software Engineer*
>
> *WSO2 Inc.*
>
> *Blog: http://manuriamayaperera.blogspot.com
> *
>
>
>


-- 
*Anjana Fernando*
Associate Director / Architect
WSO2 Inc. | http://wso2.com
lean . enterprise . middleware


Re: [Dev] [puppet] Setting up puppet home from script

2016-09-21 Thread Vishanth Balasubramaniam
Hi,

That is probably the reason. And yes, it should be fixed.

Regards,
Vishanth

On Wed, Sep 21, 2016 at 11:06 AM, Anuruddha Liyanarachchi <
anurudd...@wso2.com> wrote:

> Hi Vishanth,
>
> "Declare -A" is not working on macOS because the bash version that ships
> with macOS does not support declaring associative arrays [1]. Associative
> array support is available from bash 4 upwards, and the macOS bash
> version is 3.2.57.
>
> I am currently looking into an alternative method without using
> associative arrays.
>
> [1] http://stackoverflow.com/questions/6047648/bash-4-
> associative-arrays-error-declare-a-invalid-option
>
> On Tue, Sep 20, 2016 at 5:23 PM, Vishanth Balasubramaniam <
> vishan...@wso2.com> wrote:
>
>> Hi Akila,
>>
>> With those changes, whatever product I specify it is setting up wso2das.
>>
>> For example, when I give *./setup.sh -p esb*, it is setting up wso2das
>> module.
>>
>> Probably the declaration is not properly functioning.
>>
>> Regards,
>> Vishanth
>>
>> On Mon, Sep 19, 2016 at 1:59 PM, Akila Ravihansa Perera <
>> raviha...@wso2.com> wrote:
>>
>>> Hi,
>>>
>>> I've improved the Puppet-Home setup script by introducing two maps:
>>> product_code_to_name_map and product_name_to_module_repo_map. With this
>>> approach we can handle cases where the product name and product code are
>>> different; e.g. the API Manager code is "apim" while the product name is
>>> "wso2am".
>>>
>>> I've also added platform support for Hiera data. You can use
>>> "setup.sh -p  -l " to set up Puppet Home with
>>> Hiera data for a specific platform. If none is given, it defaults to the
>>> 'default' platform.
>>>
>>> The relevant platform repo for the given product should contain a
>>> hieradata directory which will be symlinked to PUPPET_HOME/hieradata. For
>>> example, for the wso2esb Kubernetes platform, https://github.com/wso2/kubernetes-esb
>>> should contain a "hieradata" directory at the repo root level:
>>> https://github.com/wso2/kubernetes-esb/tree/master/hieradata
>>>
>>> Thanks.
>>>
>>> On Thu, Sep 8, 2016 at 9:59 AM, Anuruddha Liyanarachchi <
>>> anurudd...@wso2.com> wrote:
>>>
 Hi Pubudu,

 +1 for the platform support.
 I will add the platform support once we have finalized the platform
 hieradata structure.

 On Thu, Sep 8, 2016 at 3:44 AM, Imesh Gunaratne  wrote:

>
>
> On Wed, Sep 7, 2016 at 10:09 PM, Pubudu Gunatilaka 
> wrote:
>
>>
>> I think we need to include the platform as well. If we consider the
>> big picture, ideally any user should be able to use this script and 
>> create
>> a puppet home for building docker images for Kubernetes, Mesos, or any
>> other platforms. As we have separate repos for platform hieradata,  we 
>> need
>> to copy those hieradata to the puppet home repo.
>>
>
> ​+1​
>
>
>>
>> Thank you!
>>
>>
>> On Wed, Sep 7, 2016 at 8:21 PM, Imesh Gunaratne 
>> wrote:
>>
>>>
>>>
>>> On Wed, Sep 7, 2016 at 6:24 PM, Anuruddha Liyanarachchi <
>>> anurudd...@wso2.com> wrote:
>>>
 Hi Imesh,

 I have now added the ability to configure multiple products using
 comma separated product list.
 Also included '-p all' option which configures all the products.

 Ex: ./setup.sh -p as
 Ex: ./setup.sh -p as,esb,bps
 Ex: ./setup.sh -p all

>>>
>>> Great!
>>> ​Nice to hear that!
>>>

 On Wed, Sep 7, 2016 at 12:46 AM, Imesh Gunaratne 
 wrote:

> Great work Anuruddha! The bash script works well!
>
> Shall we add the ability to install multiple product modules in
> one go? Maybe we can use a comma separated product list with -p.
>
> Thanks
>
> On Tue, Sep 6, 2016 at 6:26 PM, Anuruddha Liyanarachchi <
> anurudd...@wso2.com> wrote:
>
>> Hi,
>>
>> We have created separate GitHub repos for Puppet modules so that
>> the Puppet module can be shipped as part of a product release.
>>
>> Since modules are distributed, we have introduced a script to
>> generate PUPPET_HOME. The script works as follows.
>>
>>  1. Check puppet_home folder exists.
>>
>>  2. Create folder structure required for puppet_home.
>> ├── hiera.yaml
>> ├── hieradata
>> ├── manifests
>> └── modules
>>
>> 3.  Create a symlink to manifest/site.pp file.
>>
>> 4. Clone wso2base puppet module into  /modules
>> directory.
>>
>> 5. Create a symlink to wso2base common.yaml hiera-file.
>>
>>> /modules/wso2base/hieradata/wso2/common.yaml ->
>>> /hieradata/dev/wso2/
>>
>>
>> 6. Clone wso2 puppet 

Re: [Dev] [Integration Cloud] Swagger Support for ESB REST APIs

2016-09-21 Thread Jagath Sisirakumara Ariyarathne
Hi All,

In addition to the above, Swagger definitions will be provided in JSON and
YAML formats; initially this will be done for JSON. Definitions can be
retrieved from URLs that have swagger.json or swagger.yaml as the query string.

Example:

API Name is SampleAPI

*Json format*
http://localhost:8280/SampleAPI?swagger.json
http://localhost:8280/t/abc.com/SampleAPI?swagger.json

*Yaml format*
http://localhost:8280/SampleAPI?swagger.yaml
http://localhost:8280/t/abc.com/SampleAPI?swagger.yaml

In order to generate these definitions, we are planning to use two new
HttpGetProcessor implementations which can be configured in carbon.xml.
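A sketch of the dispatch such a processor might perform (a hypothetical helper, not the actual HttpGetProcessor API):

```shell
# Pick the output format from the query string of the request URL.
swagger_format() {
  case "${1#*\?}" in
    swagger.json) echo "json" ;;
    swagger.yaml) echo "yaml" ;;
    *)            echo "unknown" ;;
  esac
}
swagger_format "http://localhost:8280/SampleAPI?swagger.json"   # prints json
```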

Thanks.

On Tue, Sep 20, 2016 at 4:18 PM, Maheeka Jayasuriya 
wrote:

> Hi all,
>
> We are researching supporting Swagger definitions for REST APIs in ESB.
> This implementation is quite different from what APIM offers: APIM offers
> Swagger to API configuration generation, whereas what we need to do here is
> API configuration to Swagger definition generation.
>
> For the first iteration, I think we can focus on minimal Swagger
> support, where the main focus will be only on the HTTP verb and the
> uri-template or url-mapping.
>
> Following is what I tested to compose a minimal Swagger Definition. I
> started off from the Swagger Pet Store Sample [1] which was reduced to [2]
> and moved on to create [3]. This is IMO the minimum required for a Swagger
> definition. We can improve on this, but for the time being it would
> satisfy the requirements of Integration Cloud.
>
> Please note that I have made the following changes to the original Swagger
> definition of PetStore in order to achieve the final Swagger definition for
> ESB.
>   - Remove descriptions, summary, tags, operationId, schema, basepath,
> definitions and security elements
>   - accept and produce application/json and application/xml as default
> content types for all resources
>   - Include a default response code
>   - Add body as parameter for all POST and PUT operations in addition to
> the uri parameters
>   - All parameters are considered format binary and type String by default
>
> Please note these changes were done based on the level of information we
> have on the API by referring to its configuration. For example, we do not
> have a way of determining the request format, URI parameter types, or
> content-types of the API. This will require further digging into and
> analysis of the configuration.
>
> Find the diff of these changes done to the Swagger definition at [4]
>
> To get the required information from the Rest API configuration we can
> invoke RestApiAdminServices's getApiByName operation which returns the
> following response [5] for an API created that has the PetStore resources.
> We can identify the methods and uri-template and mappings from this
> response.
>
> Please let us know your thoughts.
>
> Thanks,
> Maheeka
>
> [1] http://petstore.swagger.io/#/pet
> [2] https://gist.github.com/maheeka/4eaedd2e2e0765959a4166865bf9adf9
> [3] https://gist.github.com/maheeka/ec23751f21d8d7d5abaa4f9130f233f2
> [4] https://www.diffchecker.com/xEu0NSNz
> [5] http://schemas.
> xmlsoap.org/soap/envelope/">
>
>   http://org.apache.axis2/xsd;>
>  http://api.rest.carbon.wso2.org/xsd; xmlns:xsi="http://www.w3.org/
> 2001/XMLSchema-instance">
> 
> /pet
> PetstoreAPI.xml
> 
> false
> PetstoreAPI
> -1
> 
>
>
>
>
>
>POST
>PUT
>
>
>0
>
>/
>
> 
> 
>
>
>
>
>
>POST
>DELETE
>GET
>
>
>0
>/{petId}
>
>
> 
> false
> false
>  
>   
>
> 
>
>
> Thanks,
>
> Maheeka Jayasuriya
> Senior Software Engineer
> Mobile : +9450661
>



-- 
Jagath Ariyarathne
Technical Lead
WSO2 Inc.  http://wso2.com/
Email: jaga...@wso2.com
Mob  : +94 77 386 7048



[Dev] About the Carbon Core documentation and the usage of MultitenantRESTServlet

2016-09-21 Thread wenxzhen
Dear all,

When looking at the Carbon 4.4.9 Java docs [1], I found there are only 3 items:

Carbon core
Carbon registry
Carbon user management

Are these 3 items complete? The reason I ask is that I didn't find the API
docs for the package org.wso2.carbon.tenant.mgt.

Another question: in the Carbon 4.4.9 core documentation [2], I found the
class org.wso2.carbon.core.multitenancy.MultitenantRESTServlet. How is this
class used? Is there any description for reference?

[1] https://docs.wso2.com/display/Carbon449/Java+Documentation
[2] http://product-dist.wso2.com/javadocs/carbon/4.4.9/core-docs/ 

Best Regards,
Wenxing


Re: [Dev] [DAS][310] Errors while running APIM_INCREMENTAL_PROCESSING_SCRIPT

2016-09-21 Thread Malith Munasinghe
Hi All,

Thanks for the prompt responses we will do the needful.

Regards,
Malith

On Wed, Sep 21, 2016 at 2:54 PM, Rukshan Premathunga 
wrote:

> Hi Malith,
>
> The cApp we provided for Analytics APIM will not work on this DAS because
> of the changes in DAS 3.1.0. Because of that, we either need to use
> Analytics APIM or update the cApp with the above changes.
>
> Thanks and Regards.
>
> On Wed, Sep 21, 2016 at 2:48 PM, Niranda Perera  wrote:
>
>> Hi Malith,
>>
>> Yes, correct! you need to change the script.
>>
>> Additionally, there are some changes in the carbonJdbc connector as
>> well... so, you might need to watch out for it!
>>
>> Please check with the APIM and ESB teams whether we are doing a
>> feature release with the DAS 3.1.0 changes.
>>
>> cheers
>>
>> On Wed, Sep 21, 2016 at 5:11 AM, Malith Munasinghe 
>> wrote:
>>
>>> Hi All,
>>>
>>> While preparing a DAS 3.1.0 setup to run APIM Analytics I have added the
>>> features as in [1]. After deploying the CApp for APIM Analytics I ran
>>> into the error below. According to the error, *incrementalProcessing* is
>>> not a valid option, and according to [2] the syntax to pass this option
>>> is *incrementalParams*. In order to get DAS 3.1.0 to process APIM
>>> Analytics, do we have to change the scripts with this option as well?
>>>
>>>
>>> TID: [-1234] [] [2016-09-21 08:54:00,019] ERROR
>>> {org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService}
>>> -  Error while executing query : CREATE TEMPORARY TABLE
>>> APIMGT_PERMINUTE_REQUEST_DATA USING CarbonAnalytics OPTIONS(tableName
>>> "ORG_WSO2_APIMGT_STATISTICS_PERMINUTEREQUEST", schema "year INT -i,
>>> month INT -i, day INT -i, hour INT -i, minute INT -i,consumerKey
>>> STRING, context STRING, api_version STRING, api STRING, version STRING,
>>> requestTime LONG, userId STRING, hostName STRING,apiPublisher STRING,
>>> total_request_count LONG, resourceTemplate STRING, method STRING,
>>> applicationName STRING, tenantDomain STRING,userAgent STRING,
>>> resourcePath STRING, request INT, applicationId STRING, tier STRING,
>>> throttledOut BOOLEAN, clientIp STRING,applicationOwner STRING,
>>> _timestamp LONG -i",primaryKeys "year, month, day, hour, minute,
>>> consumerKey, context, api_version, userId, hostName, apiPublisher,
>>> resourceTemplate, method, userAgent, clientIp",incrementalProcessing
>>> "APIMGT_PERMINUTE_REQUEST_DATA, HOUR",mergeSchema "false")
>>> {org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService}
>>> org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException:
>>> Exception in executing query CREATE TEMPORARY TABLE
>>> APIMGT_PERMINUTE_REQUEST_DATA USING CarbonAnalytics OPTIONS(tableName
>>> "ORG_WSO2_APIMGT_STATISTICS_PERMINUTEREQUEST", schema "year INT -i,
>>> month INT -i, day INT -i, hour INT -i, minute INT -i,consumerKey
>>> STRING, context STRING, api_version STRING, api STRING, version STRING,
>>> requestTime LONG, userId STRING, hostName STRING,apiPublisher STRING,
>>> total_request_count LONG, resourceTemplate STRING, method STRING,
>>> applicationName STRING, tenantDomain STRING,userAgent STRING,
>>> resourcePath STRING, request INT, applicationId STRING, tier STRING,
>>> throttledOut BOOLEAN, clientIp STRING,applicationOwner STRING,
>>> _timestamp LONG -i",primaryKeys "year, month, day, hour, minute,
>>> consumerKey, context, api_version, userId, hostName, apiPublisher,
>>> resourceTemplate, method, userAgent, clientIp",incrementalProcessing
>>> "APIMGT_PERMINUTE_REQUEST_DATA, HOUR",mergeSchema "false")
>>> at org.wso2.carbon.analytics.spark.core.internal.SparkAnalytics
>>> Executor.executeQueryLocal(SparkAnalyticsExecutor.java:764)
>>> at org.wso2.carbon.analytics.spark.core.internal.SparkAnalytics
>>> Executor.executeQuery(SparkAnalyticsExecutor.java:721)
>>> at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcesso
>>> rService.executeQuery(CarbonAnalyticsProcessorService.java:201)
>>> at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcesso
>>> rService.executeScript(CarbonAnalyticsProcessorService.java:151)
>>> at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(A
>>> nalyticsTask.java:60)
>>> at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute
>>> (TaskQuartzJobAdapter.java:67)
>>> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
>>> at java.util.concurrent.Executors$RunnableAdapter.call(Executor
>>> s.java:471)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>>> Executor.java:1145)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
>>> lExecutor.java:615)
>>> at java.lang.Thread.run(Thread.java:745)
>>> 

Re: [Dev] [DAS][310] Errors while running APIM_INCREMENTAL_PROCESSING_SCRIPT

2016-09-21 Thread Rukshan Premathunga
Hi Malith,

The cApp we provided for Analytics APIM will not work on this DAS because of
the changes in DAS 3.1.0. Because of that, we either need to use Analytics
APIM or update the cApp with the above changes.

Thanks and Regards.

On Wed, Sep 21, 2016 at 2:48 PM, Niranda Perera  wrote:

> Hi Malith,
>
> Yes, correct! you need to change the script.
>
> Additionally, there are some changes in the carbonJdbc connector as
> well... so, you might need to watch out for it!
>
> Please check with the APIM team and ESB team whether we are doing a
> feature release with the DAS 310 changes?
>
> cheers
>
> On Wed, Sep 21, 2016 at 5:11 AM, Malith Munasinghe 
> wrote:
>
>> Hi All,
>>
>> While preparing a DAS 3.1.0 setup to run APIM Analytics I have added the
>> features as in [1]. After deploying the CApp for APIM Analytics I ran
>> into the error below. According to the error, *incrementalProcessing* is
>> not a valid option, and according to [2] the syntax to pass this option
>> is *incrementalParams*. In order to get DAS 3.1.0 to process APIM
>> Analytics, do we have to change the scripts with this option as well?
>>
>>
>> TID: [-1234] [] [2016-09-21 08:54:00,019] ERROR
>> {org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService} -
>>  Error while executing query : CREATE TEMPORARY TABLE
>> APIMGT_PERMINUTE_REQUEST_DATA USING CarbonAnalytics OPTIONS(tableName
>> "ORG_WSO2_APIMGT_STATISTICS_PERMINUTEREQUEST", schema "year INT -i,
>> month INT -i, day INT -i, hour INT -i, minute INT -i,consumerKey
>> STRING, context STRING, api_version STRING, api STRING, version STRING,
>> requestTime LONG, userId STRING, hostName STRING,apiPublisher STRING,
>> total_request_count LONG, resourceTemplate STRING, method STRING,
>> applicationName STRING, tenantDomain STRING,userAgent STRING,
>> resourcePath STRING, request INT, applicationId STRING, tier STRING,
>> throttledOut BOOLEAN, clientIp STRING,applicationOwner STRING,
>> _timestamp LONG -i",primaryKeys "year, month, day, hour, minute,
>> consumerKey, context, api_version, userId, hostName, apiPublisher,
>> resourceTemplate, method, userAgent, clientIp",incrementalProcessing
>> "APIMGT_PERMINUTE_REQUEST_DATA, HOUR",mergeSchema "false")
>> {org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService}
>> org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException:
>> Exception in executing query CREATE TEMPORARY TABLE
>> APIMGT_PERMINUTE_REQUEST_DATA USING CarbonAnalytics OPTIONS(tableName
>> "ORG_WSO2_APIMGT_STATISTICS_PERMINUTEREQUEST", schema "year INT -i,
>> month INT -i, day INT -i, hour INT -i, minute INT -i,consumerKey
>> STRING, context STRING, api_version STRING, api STRING, version STRING,
>> requestTime LONG, userId STRING, hostName STRING,apiPublisher STRING,
>> total_request_count LONG, resourceTemplate STRING, method STRING,
>> applicationName STRING, tenantDomain STRING,userAgent STRING,
>> resourcePath STRING, request INT, applicationId STRING, tier STRING,
>> throttledOut BOOLEAN, clientIp STRING,applicationOwner STRING,
>> _timestamp LONG -i",primaryKeys "year, month, day, hour, minute,
>> consumerKey, context, api_version, userId, hostName, apiPublisher,
>> resourceTemplate, method, userAgent, clientIp",incrementalProcessing
>> "APIMGT_PERMINUTE_REQUEST_DATA, HOUR",mergeSchema "false")
>> at org.wso2.carbon.analytics.spark.core.internal.SparkAnalytics
>> Executor.executeQueryLocal(SparkAnalyticsExecutor.java:764)
>> at org.wso2.carbon.analytics.spark.core.internal.SparkAnalytics
>> Executor.executeQuery(SparkAnalyticsExecutor.java:721)
>> at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcesso
>> rService.executeQuery(CarbonAnalyticsProcessorService.java:201)
>> at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcesso
>> rService.executeScript(CarbonAnalyticsProcessorService.java:151)
>> at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(
>> AnalyticsTask.java:60)
>> at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute
>> (TaskQuartzJobAdapter.java:67)
>> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
>> at java.util.concurrent.Executors$RunnableAdapter.call(
>> Executors.java:471)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>> Executor.java:1145)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
>> lExecutor.java:615)
>> at java.lang.Thread.run(Thread.java:745)
>> Caused by: java.lang.RuntimeException: Unknown options :
>> incrementalprocessing
>> at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelati
>> onProvider.checkParameters(AnalyticsRelationProvider.java:123)
>> at 

Re: [Dev] [DAS][310] Errors while running APIM_INCREMENTAL_PROCESSING_SCRIPT

2016-09-21 Thread Niranda Perera
Hi Malith,

Yes, correct! You need to change the script.

Additionally, there are some changes in the carbonJdbc connector as well, so
you might need to watch out for those.

Please also check with the APIM and ESB teams whether we are doing a feature
release with the DAS 310 changes.
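For example, a minimal sketch of the fix (untested; the schema and
primaryKeys values are abbreviated here from the original query, and only
the option name changes, per the incremental analytics doc linked below):

CREATE TEMPORARY TABLE APIMGT_PERMINUTE_REQUEST_DATA USING CarbonAnalytics
OPTIONS (tableName "ORG_WSO2_APIMGT_STATISTICS_PERMINUTEREQUEST",
         schema "year INT -i, month INT -i, ...",
         primaryKeys "year, month, day, hour, minute, ...",
         incrementalParams "APIMGT_PERMINUTE_REQUEST_DATA, HOUR",
         mergeSchema "false")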

cheers

On Wed, Sep 21, 2016 at 5:11 AM, Malith Munasinghe  wrote:

> Hi All,
>
> While preparing a DAS 3.1.0 to run APIM Analytics I have added features as
> in [1]
> .
> After deploying the CApp for APIM Analytics I ran into the below error.
> According to the error, *incrementalProcessing* is not a valid option.
> Also, according to [2]
>  the syntax
> to parse this option is *incrementalParams*. In order to get DAS 3.1.0 to
> process APIM Analytics, do we have to change the scripts with this option
> as well?
>
>
> TID: [-1234] [] [2016-09-21 08:54:00,019] ERROR {org.wso2.carbon.analytics.
> spark.core.CarbonAnalyticsProcessorService} -  Error while executing
> query : CREATE TEMPORARY TABLE APIMGT_PERMINUTE_REQUEST_DATA USING
> CarbonAnalytics OPTIONS(tableName 
> "ORG_WSO2_APIMGT_STATISTICS_PERMINUTEREQUEST",
> schema "year INT -i, month INT -i, day INT -i, hour INT -i, minute INT
> -i,consumerKey STRING, context STRING, api_version STRING, api STRING,
> version STRING, requestTime LONG, userId STRING, hostName STRING,
>  apiPublisher STRING, total_request_count LONG, resourceTemplate STRING,
> method STRING, applicationName STRING, tenantDomain STRING,userAgent
> STRING, resourcePath STRING, request INT, applicationId STRING, tier
> STRING, throttledOut BOOLEAN, clientIp STRING,applicationOwner STRING,
> _timestamp LONG -i",primaryKeys "year, month, day, hour, minute,
> consumerKey, context, api_version, userId, hostName, apiPublisher,
> resourceTemplate, method, userAgent, clientIp",incrementalProcessing
> "APIMGT_PERMINUTE_REQUEST_DATA, HOUR",mergeSchema "false")
> {org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService}
> org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException:
> Exception in executing query CREATE TEMPORARY TABLE
> APIMGT_PERMINUTE_REQUEST_DATA USING CarbonAnalytics OPTIONS(tableName
> "ORG_WSO2_APIMGT_STATISTICS_PERMINUTEREQUEST", schema "year INT -i,
> month INT -i, day INT -i, hour INT -i, minute INT -i,consumerKey
> STRING, context STRING, api_version STRING, api STRING, version STRING,
> requestTime LONG, userId STRING, hostName STRING,apiPublisher STRING,
> total_request_count LONG, resourceTemplate STRING, method STRING,
> applicationName STRING, tenantDomain STRING,userAgent STRING,
> resourcePath STRING, request INT, applicationId STRING, tier STRING,
> throttledOut BOOLEAN, clientIp STRING,applicationOwner STRING,
> _timestamp LONG -i",primaryKeys "year, month, day, hour, minute,
> consumerKey, context, api_version, userId, hostName, apiPublisher,
> resourceTemplate, method, userAgent, clientIp",incrementalProcessing
> "APIMGT_PERMINUTE_REQUEST_DATA, HOUR",mergeSchema "false")
> at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.
> executeQueryLocal(SparkAnalyticsExecutor.java:764)
> at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.
> executeQuery(SparkAnalyticsExecutor.java:721)
> at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorServic
> e.executeQuery(CarbonAnalyticsProcessorService.java:201)
> at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorServic
> e.executeScript(CarbonAnalyticsProcessorService.java:151)
> at org.wso2.carbon.analytics.spark.core.AnalyticsTask.
> execute(AnalyticsTask.java:60)
> at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.
> execute(TaskQuartzJobAdapter.java:67)
> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: Unknown options :
> incrementalprocessing
> at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.
> checkParameters(AnalyticsRelationProvider.java:123)
> at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.
> setParameters(AnalyticsRelationProvider.java:113)
> at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.
> createRelation(AnalyticsRelationProvider.java:75)
> at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.
> createRelation(AnalyticsRelationProvider.java:45)
> at 

[Dev] [DAS][310] Errors while running APIM_INCREMENTAL_PROCESSING_SCRIPT

2016-09-21 Thread Malith Munasinghe
Hi All,

While preparing a DAS 3.1.0 to run APIM Analytics I have added features as
in [1]
.
After deploying the CApp for APIM Analytics I ran into the below error.
According to the error, *incrementalProcessing* is not a valid option.
Also, according to [2]
 the syntax to
parse this option is *incrementalParams*. In order to get DAS 3.1.0 to
process APIM Analytics, do we have to change the scripts with this option as
well?


TID: [-1234] [] [2016-09-21 08:54:00,019] ERROR
{org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService} -
 Error while executing query : CREATE TEMPORARY TABLE
APIMGT_PERMINUTE_REQUEST_DATA USING CarbonAnalytics OPTIONS(tableName
"ORG_WSO2_APIMGT_STATISTICS_PERMINUTEREQUEST", schema "year INT -i,
month INT -i, day INT -i, hour INT -i, minute INT -i,consumerKey
STRING, context STRING, api_version STRING, api STRING, version STRING,
requestTime LONG, userId STRING, hostName STRING,apiPublisher STRING,
total_request_count LONG, resourceTemplate STRING, method STRING,
applicationName STRING, tenantDomain STRING,userAgent STRING,
resourcePath STRING, request INT, applicationId STRING, tier STRING,
throttledOut BOOLEAN, clientIp STRING,applicationOwner STRING,
_timestamp LONG -i",primaryKeys "year, month, day, hour, minute,
consumerKey, context, api_version, userId, hostName, apiPublisher,
resourceTemplate, method, userAgent, clientIp",incrementalProcessing
"APIMGT_PERMINUTE_REQUEST_DATA, HOUR",mergeSchema "false")
{org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService}
org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException:
Exception in executing query CREATE TEMPORARY TABLE
APIMGT_PERMINUTE_REQUEST_DATA USING CarbonAnalytics OPTIONS(tableName
"ORG_WSO2_APIMGT_STATISTICS_PERMINUTEREQUEST", schema "year INT -i,
month INT -i, day INT -i, hour INT -i, minute INT -i,consumerKey
STRING, context STRING, api_version STRING, api STRING, version STRING,
requestTime LONG, userId STRING, hostName STRING,apiPublisher STRING,
total_request_count LONG, resourceTemplate STRING, method STRING,
applicationName STRING, tenantDomain STRING,userAgent STRING,
resourcePath STRING, request INT, applicationId STRING, tier STRING,
throttledOut BOOLEAN, clientIp STRING,applicationOwner STRING,
_timestamp LONG -i",primaryKeys "year, month, day, hour, minute,
consumerKey, context, api_version, userId, hostName, apiPublisher,
resourceTemplate, method, userAgent, clientIp",incrementalProcessing
"APIMGT_PERMINUTE_REQUEST_DATA, HOUR",mergeSchema "false")
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:764)
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:721)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:60)
at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Unknown options : incrementalprocessing
at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.checkParameters(AnalyticsRelationProvider.java:123)
at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.setParameters(AnalyticsRelationProvider.java:113)
at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.createRelation(AnalyticsRelationProvider.java:75)
at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.createRelation(AnalyticsRelationProvider.java:45)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
at org.apache.spark.sql.execution.datasources.CreateTempTableUsing.run(ddl.scala:92)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at

Re: [Dev] Connectivity from ESB to other Apps using SSO

2016-09-21 Thread Manuranga Perera
Hi Srikanth,

It seems the WSO2 Salesforce connector only works if you have an Enterprise,
Unlimited, or Developer edition Salesforce account. If you have a
Professional account, you will have to buy the API access feature from
Salesforce.

Can you please let us know which edition you are using?

On Wed, Sep 21, 2016 at 10:38 AM, Manuranga Perera  wrote:

> The following mail was sent to me by Srikanth; I am forwarding it since the
> dev list rejected it because he is not registered.
>
> Hi,
>>
>> I am having an issue configuring ESB to talk to Salesforce. I tried the
>> following approaches:
>>
>>
>> *Approach-1*
>>
>> I started connecting to Salesforce using the WSO2 Salesforce connector. I
>> ended up with the following error from Salesforce:
>> *Error, "API is not enabled for this Organization or Partner"*
>> Then I tried enabling APIs with different accounts (dev, enterprise, etc.)
>> and enabled all possible options to connect to Salesforce. Not very
>> successful :(
>> But the major issue is that clients who configure Salesforce may not have
>> APIs enabled. In that case, how do we deal with such accounts? Not sure
>> what to do.
>>
>> *Approach-2*
>>
>> I thought of connecting to Salesforce via Identity Server. I created a
>> Salesforce identity in IS and configured Salesforce accordingly. But I am
>> not clear on how to call the Salesforce API from the ESB; is it possible?
>> If not, please do let me know the best possible way to solve this problem
>> using the WSO2 stack.
>>
>> I really appreciate your response. :)
>>
>> --
>> Thanks & Regards,
>>
>> Srikanth Puppala.
>>
>> 6103065998
>>
>
>
> --
> With regards,
> *Manu*ranga Perera.
>
> phone : 071 7 70 20 50
> mail : m...@wso2.com
>



-- 
With regards,
*Manu*ranga Perera.

phone : 071 7 70 20 50
mail : m...@wso2.com
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Could not send Long value in ESB 5.0.0 with the payload

2016-09-21 Thread Yashothara Shanmugarajah
Hi Nuwan,

As it is an optional parameter, I need to set it through the script mediator,
so I can't use the payload factory mediator.

Please find this proxy [1]. Here I didn't use a connector, yet I am still
getting the value in scientific notation.

[1]

http://ws.apache.org/ns/synapse;
   name="checkConnectorScript"
   startOnLoad="true"
   statistics="disable"
   trace="disable"
   transports="https,http">
   
  
 
 

 
 

{

}


 
 payload = mc.getPayloadJSON();
 var requesterId = mc.getProperty('requesterId');
var requesterIdInt = parseInt(mc.getProperty('requesterId'));
payload["requester_id"] = requesterIdInt;

 mc.setPayloadJSON(payload);
 

   https://wso2yasho.freshdesk.com/api/v2/tickets"/>

 
 
  
  
 
  
   
   



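The behaviour also reproduces outside the ESB, since the script mediator's
JavaScript engine stores every number as an IEEE-754 double. A standalone
Node.js sketch of the problem, plus a string-splicing workaround I am
considering (__REQ_ID__ is just a hypothetical placeholder name):

```javascript
// All JS numbers are IEEE-754 doubles: integers above Number.MAX_SAFE_INTEGER
// (2^53 - 1) lose precision, and values >= 1e21 serialize in exponential
// notation -- the "scientific notation" seen in the back end.
const big = "121212121212121212121212121212"; // 30-digit value from the proxy

// Naive approach: convert to Number before serializing the payload.
const naive = JSON.stringify({ requester_id: Number(big) });
console.log(naive); // exponential and lossy, e.g. {"requester_id":1.21...e+29}

// Workaround sketch: keep the value as a string while building the payload,
// then splice it into the serialized JSON as a bare (unquoted) number.
const payload = { requester_id: "__REQ_ID__" }; // placeholder token
const fixed = JSON.stringify(payload).replace('"__REQ_ID__"', big);
console.log(fixed); // {"requester_id":121212121212121212121212121212}
```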
Best Regards,
Yashothara.S
Software Engineer
WSO2
http://wso2.com
https://wso2.com/signature


On Wed, Sep 21, 2016 at 10:48 AM, Nuwan Pallewela  wrote:

> Hi Yashothara,
>
> I think this happens due to the use of the script mediator. You do not need
> to use the script mediator here. Just use the payload factory mediator to
> build the payload, or use the data mapper mediator if you need to do more
> complex mapping.
>
> [1] https://docs.wso2.com/display/ESB481/PayloadFactory+Mediator#
> PayloadFactoryMediator-Example2:JSON
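> For example, a minimal sketch of a JSON payload factory for this case
> (untested; it assumes a "requesterId" property has been set earlier in the
> sequence):
>
> <payloadFactory media-type="json">
>    <format>{"requester_id": $1}</format>
>    <args>
>       <arg evaluator="xml" expression="$ctx:requesterId"/>
>    </args>
> </payloadFactory>
>
> Since $1 is not quoted in the format, the value should be emitted as a bare
> JSON number rather than a String.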
>
> Thanks,
> Nuwan
>
> On Wed, Sep 21, 2016 at 10:33 AM, Yashothara Shanmugarajah <
> yashoth...@wso2.com> wrote:
>
>> Please find the template[1] and proxy[2].
>>
>> [1]
>> http://ws.apache.org/ns/synapse;>
>> 
>> 
>> 
>> > expression="$func:requesterId"/>
>> 
>>   > expression="$ctx:uri.var.requesterId"/>
>> 
>> 
>> 
>> {
>>
>> }
>> 
>> 
>>
>> 
>> 
>>
>> 
>> 
>> >  payload = mc.getPayloadJSON();
>>
>>  var requesterId = mc.getProperty("uri.var.requesterId");
>>
>>  if (requesterId != null && requesterId != ""){
>>  var requesterIdInt = parseInt(mc.getProperty("uri.var.requesterId"));
>> payload["requester_id"] = requesterIdInt;
>>  }
>>
>>  mc.setPayloadJSON(payload);
>>  ]]>
>> 
>>
>> 
>> 
>> 
>> 
>> 
>>
>> 
>> 
>> 
>>
>> 
>> 
>>
>>
>>
>> [2] 
>> http://ws.apache.org/ns/synapse;
>>name="createTicket"
>>startOnLoad="true"
>>statistics="disable"
>>trace="disable"
>>transports="https,http">
>>
>>   
>>  
>>  
>>  > name="requesterId"/>
>>  
>> >   name="121212121212121212121212121212"/>
>>  
>>  
>> {$ctx:apiKey}
>> {$ctx:apiUrl}
>>  
>>  
>> {$ctx:requesterId}
>>  
>>  
>>   
>>   
>>  
>>  
>>   
>>
>>
>> 
>>
>>
>> Best Regards,
>> Yashothara.S
>> Software Engineer
>> WSO2
>> http://wso2.com
>> https://wso2.com/signature
>> 
>>
>> On Wed, Sep 21, 2016 at 10:20 AM, Malaka Silva  wrote:
>>
>>> Hi Yashothara,
>>>
>>> Can you share the config you used.
>>>
>>> On Wed, Sep 21, 2016 at 9:33 AM, Yashothara Shanmugarajah <
>>> yashoth...@wso2.com> wrote:
>>>
 Hi,

 I need to send a JSON payload with a long value (e.g. 1910655), not as a
 String. In the back end it changes to scientific notation (1.996356E10)
 in ESB 5.0.0. Is there any way to resolve this?

 Thanks.

 Best Regards,
 Yashothara.S
 Software Engineer
 WSO2
 http://wso2.com
 https://wso2.com/signature
 

>>>
>>>
>>>
>>> --
>>>
>>> Best Regards,
>>>
>>> Malaka Silva
>>> Senior Technical Lead
>>> M: +94 777 219 791
>>> Tel : 94 11 214 5345
>>> Fax :94 11 2145300
>>> Skype : malaka.sampath.silva
>>> LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
>>> Blog : http://mrmalakasilva.blogspot.com/
>>>
>>> WSO2, Inc.
>>> lean . enterprise . middleware
>>> https://wso2.com/signature
>>> http://www.wso2.com/about/team/malaka-silva/
>>> 
>>> https://store.wso2.com/store/
>>>
>>> Don't make Trees rare, we should keep them with care
>>>
>>
>>
>
>
> --
> --
>
> *Nuwan Chamara Pallewela*
>
>
> *Software 

[Dev] [IoTS] when will the IoTS be GA?

2016-09-21 Thread 云展智创
Hi,

Could anyone tell me when the IoTS 1.0.0 release will be GA? Thanks a lot.

--
Zhanwen Zhou (Jason)
+86 13922218435
zhanwen.z...@smartcloudex.com
Guangzhou Smart Cloudex Technology Co., Ltd.
Business: IOT, API Management

_
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev