————— 2022-6-2 —————

Heisenberg 20:47
Would implementing this as RPC requests be simpler?

peacewong@WDS 20:47
The RPC dependency is too strong, and we need to consider the case where the DS 
service is deployed on its own.

peacewong@WDS 20:48
Also, the DS service will be accessed by other services, so it's better to unify 
the interface.

Heisenberg 20:48
Mmm, I understand

Heisenberg 23:10
Brother Ping, sorry to bother you 😄, please take a look at these two issues when 
you have time:
https://github.com/apache/incubator-linkis/issues/2210
https://github.com/apache/incubator-linkis/issues/2211

————— 2022-6-3 —————

Heisenberg 11:57
@peacewong@WDS Brother Ping, when testing locally with the client API, I keep 
hitting this exception:

Heisenberg 11:57
Image 1 (can be viewed in the attachment)

Heisenberg 11:58
Exception in thread "main" org.apache.http.NoHttpResponseException: 
localhost:9001 failed to respond

The service itself is fine, it starts normally, and testing it with Postman works 
without problems.

Heisenberg 11:58
val clientConfig = DWSClientConfigBuilder.newBuilder()
  .addServerUrl("http://127.0.0.1:9001") // set linkis-mg-gateway url: http://{ip}:{port}
  .connectionTimeout(30000) // connection timeout
  .discoveryEnabled(false) // disable discovery
  .discoveryFrequency(1, TimeUnit.MINUTES) // discovery frequency
  .loadbalancerEnabled(false) // disable load balancer
  .maxConnectionSize(5) // set max connections
  .retryEnabled(false) // disable retry
  .readTimeout(30000) // set read timeout
  .setAuthenticationStrategy(new TokenAuthenticationStrategy()) // AuthenticationStrategy: Linkis auth supports static and token
  .setAuthTokenKey("Token-Code") // set submit user / token key
  .setAuthTokenValue("DSM-AUTH") // set password / token value
  .setDWSVersion("v1") // linkis rest version v1
  .build()

Heisenberg 11:58
I don't know if any of you have run into this kind of exception [laughing through 
tears]

peacewong@WDS 12:13
Looks like it got no response? Could the request be wrong?

casion 12:41
What does the scenario look like? After each test creates a client, does it run 
multiple requests? This feels like a problem where a reused TCP connection has 
been dropped.

Heisenberg 12:53
Image 2 (can be viewed in the attachment)

Heisenberg 12:54
The client is constructed in main and used to call the service interface.

Heisenberg 12:55
I have a VPN proxy turned on in global mode, which also affects network requests 
sent from the command line. Also, the data source service isn't actually local; 
it's on a remote machine.

Heisenberg 12:56
That could be what's happening, I just haven't reproduced it locally. In some 
scenarios httpclient can indeed run into this kind of problem; I googled it, and 
many blog posts say the same thing.
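
(If the dropped-reused-connection theory above holds, the simplest knob already 
visible in the client snippet earlier in the thread is the builder's retry switch. 
The sketch below is illustrative only: it assumes retryEnabled(true) makes the 
client re-send a request that dies on a stale pooled connection, and the import 
paths follow the Linkis 1.x SDK and may differ in other versions.)

import org.apache.linkis.httpclient.dws.authentication.TokenAuthenticationStrategy
import org.apache.linkis.httpclient.dws.config.DWSClientConfigBuilder

// Sketch: same builder as above, but with retry enabled. Retrying is the usual
// mitigation when a pooled connection was silently closed by the peer (or a
// proxy) and the next request on it fails with NoHttpResponseException.
val retryingConfig = DWSClientConfigBuilder.newBuilder()
  .addServerUrl("http://127.0.0.1:9001") // linkis-mg-gateway url: http://{ip}:{port}
  .connectionTimeout(30000)
  .readTimeout(30000)
  .maxConnectionSize(5)
  .retryEnabled(true) // assumption: re-sends requests that fail on a stale connection
  .discoveryEnabled(false)
  .loadbalancerEnabled(false)
  .setAuthenticationStrategy(new TokenAuthenticationStrategy())
  .setAuthTokenKey("Token-Code")
  .setAuthTokenValue("DSM-AUTH")
  .setDWSVersion("v1")
  .build()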

————— 2022-6-4 —————

Heisenberg 20:39
@peacewong@WDS Brother Ping, in 1.0.3 engine resources were refreshed with
curl -d '{"method": "/enginePlugin/engineConn/refreshAll"}' -H 'Content-Type: 
application/json' http://127.0.0.1:9103/api/rest_j/v1/rpc/receiveAndReply

Is it deprecated in 1.1.x?

Heisenberg 20:58
Refreshing engines in 1.1.x uses the EP service's interface:

http://{gateway IP}:9001/api/rest_j/v1/engineplugin/refeshAll

Heisenberg 21:02
In 1.1.x, refreshing engine materials via the RPC REST API throws a null pointer 
exception:
https://github.com/apache/incubator-linkis/issues/2052
In the BaseRPCSender.getInstanceInfo method, at
val name = map.get("name").toString
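
(The reported NPE is consistent with map.get("name") returning null for that 
payload, so the direct .toString call blows up. Purely as an illustration, and 
assuming map is a java.util.Map as the call style suggests, a null-safe variant 
of that line could look like the following; this is not the project's actual fix.)

// Hypothetical null-safe rewrite of the failing line in BaseRPCSender.getInstanceInfo:
// wrap the possibly-null lookup in Option instead of calling .toString on it directly.
val name = Option(map.get("name")).map(_.toString).getOrElse("")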

peacewong@WDS 21:04
Yes, it was changed; a new restful interface was added.

peacewong@WDS 21:04
It's in version 1.1.2; the documentation for this restful interface will be 
updated later.

peacewong@WDS 21:04
It supports refreshing everything, and also refreshing a specified engine and 
version.

Heisenberg 21:05
Hmm, the newly added interface works and the documentation is there, no problem. 
It's just that there is a problem when the 1.0.3 method is used again:
http://127.0.0.1:9103/api/rest_j/v1/rpc/receiveAndReply

Heisenberg 21:06
Image 3 (can be viewed in the attachment)

Heisenberg 21:06
null pointer here

peacewong@WDS 21:06
Yes, because the message-scheduler is deprecated.

peacewong@WDS 21:06
We can look at how to keep it compatible later.

Heisenberg 21:07
I think the 1.0.3 approach is risky, because /rpc/receiveAndReply can be called 
directly without any user permission check.

peacewong@WDS 21:08
Yes, it needs to be optimized

peacewong@WDS 21:08
We considered before having the RPC interface and the restful interface route to 
the same place, so the restful call just goes straight through the RPC interface.

Heisenberg 21:09
It does not go to a certain instance directly connected to the gateway, and 
feels that a mechanism similar to interception can be added at present. Such a 
call will first return a fixed prompt type information
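
(One way to picture the interception idea suggested above: a small servlet filter 
in front of the RPC endpoint that short-circuits direct calls to 
/rpc/receiveAndReply with a fixed prompt. This is only a hypothetical sketch of 
the suggestion, not anything that exists in Linkis; the class name, status code, 
and message body are made up for illustration.)

import javax.servlet._
import javax.servlet.http.{HttpServletRequest, HttpServletResponse}

// Hypothetical sketch: reject direct calls to /rpc/receiveAndReply with a fixed
// prompt message instead of letting them reach the RPC handler.
class RpcReceiveAndReplyGuardFilter extends Filter {

  override def init(filterConfig: FilterConfig): Unit = {}

  override def doFilter(req: ServletRequest, resp: ServletResponse, chain: FilterChain): Unit = {
    val request = req.asInstanceOf[HttpServletRequest]
    val response = resp.asInstanceOf[HttpServletResponse]
    if (request.getRequestURI.endsWith("/rpc/receiveAndReply")) {
      // Short-circuit with a fixed prompt instead of executing the RPC method.
      response.setStatus(HttpServletResponse.SC_FORBIDDEN)
      response.setContentType("application/json;charset=UTF-8")
      response.getWriter.write(
        """{"status":1,"message":"Please use the dedicated restful refresh interface instead of /rpc/receiveAndReply."}""")
    } else {
      chain.doFilter(req, resp)
    }
  }

  override def destroy(): Unit = {}
}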

peacewong@WDS 21:09
It's better to keep them separate; going forward, a separate restful refresh 
interface is recommended.

Heisenberg 21:09
Uh-huh


