Hi Liam, I would suggest starting out by taking a look at the gRPC quickstart for 
Python at https://grpc.io/docs/quickstart/python.html and then modifying that 
example to do what you would like.

The Flask server would launch the separate process using multiprocessing. The 
model process would create a gRPC service endpoint. The Flask server would wait 
for the model process to start and then establish a gRPC connection, as a 
client, to the service endpoint of the model process. The gRPC service of the 
model process would expose methods such as trainModel or getModelStatus. When 
an HTTP request arrives at the Flask server, the server would then invoke the 
corresponding gRPC method in the model process.
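As a rough sketch, the interface of the model process could be described in a 
.proto file along these lines (the service, method, and message names here are 
illustrative only, mirroring the trainModel/getModelStatus examples above, not 
taken from an existing project):

```proto
syntax = "proto3";

package model;

// Illustrative interface for the resident model process.
service ModelService {
  rpc TrainModel (TrainRequest) returns (TrainReply) {}
  rpc GetModelStatus (StatusRequest) returns (StatusReply) {}
}

message TrainRequest {
  string dataset_path = 1;
}
message TrainReply {
  bool started = 1;
}
message StatusRequest {}
message StatusReply {
  string status = 1;  // e.g. "loading", "ready", "training"
}
```

Compiling this with grpcio-tools would generate the Python stubs that the Flask 
server (as client) and the model process (as server) would both use.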

I hope that helps.

Regards --Roland


From: Liam Geron <[email protected]>
Date: Thursday, December 20, 2018 at 9:53 AM
To: Roland Hochmuth <[email protected]>
Cc: Scikit-learn mailing list <[email protected]>
Subject: Re: [scikit-learn] How to keep a model running in memory?

Hi Roland,

Thanks for the suggestion! I'll certainly look into gRPC or similar frameworks. 
Currently we use multiprocessing, but not to that extent. How
would the second process have a sort of "listener" to respond to incoming 
requests if it is running persistently?

Thanks so much for the help.

Best,
Liam

On Thu, Dec 20, 2018 at 11:12 AM Roland Hochmuth 
<[email protected]> wrote:
Hi Liam, Not sure I have the complete context for what you are trying to do, 
but have you considered using Python multiprocessing to start a separate 
process? The lifecycle of that process could start when the Flask server 
starts-up or on the first request. The separate process would load and run the 
model. Depending on what you would like to do, some form of IPC mechanism, such 
as gRPC, could be used to control or get updates from the model process.
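As a minimal, runnable sketch of that pattern using only the standard library 
(a multiprocessing Pipe stands in for the gRPC channel here; the command names 
and the placeholder model are illustrative):

```python
# Sketch: a model process that stays resident in memory and answers requests
# over a pipe. The loop in model_process() is the "listener" -- it blocks on
# recv() until the parent (e.g. a Flask request handler) sends a command.
from multiprocessing import Process, Pipe


def model_process(conn):
    """Child process: load the model once, then serve requests until stopped."""
    model = {"status": "loaded"}  # placeholder for e.g. joblib.load("model.pkl")
    while True:
        cmd, payload = conn.recv()  # blocks until a request arrives
        if cmd == "predict":
            # placeholder for model.predict(payload)
            conn.send(["prediction-for-%s" % x for x in payload])
        elif cmd == "status":
            conn.send(model["status"])
        elif cmd == "stop":
            conn.send("stopped")
            break


if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    proc = Process(target=model_process, args=(child_conn,))
    proc.start()

    # A Flask request handler would do something like this per request:
    parent_conn.send(("status", None))
    print(parent_conn.recv())   # prints: loaded

    parent_conn.send(("predict", [1, 2]))
    print(parent_conn.recv())   # prints: ['prediction-for-1', 'prediction-for-2']

    parent_conn.send(("stop", None))
    parent_conn.recv()
    proc.join()
```

Swapping the Pipe for a gRPC channel changes the transport but not the shape: 
the model loads once at process start, and a blocking server loop handles each 
incoming request against the already-loaded model.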

Regards --Roland


From: scikit-learn <[email protected]> 
on behalf of Aneto <[email protected]>
Reply-To: Scikit-learn mailing list <[email protected]>
Date: Thursday, December 20, 2018 at 8:21 AM
To: "[email protected]" <[email protected]>
Cc: Liam Geron <[email protected]>
Subject: [scikit-learn] How to keep a model running in memory?
Subject: [scikit-learn] How to keep a model running in memory?

Hi scikit learn community,

We currently use scikit-learn for a model that generates predictions on a 
server endpoint. We would like to keep the model running in memory instead of 
having to re-load the model for every new request that comes in to the server.

Can you please point us in the right direction for this? Any tutorials or 
examples would be appreciated.

In case it's helpful, we use Flask for our web server.

Thank you!

Aneto
_______________________________________________
scikit-learn mailing list
[email protected]
https://mail.python.org/mailman/listinfo/scikit-learn
