I've done some work recently on GPs, which addresses some of the
discussed issues. A pull-request containing this code can be found under
https://github.com/scikit-learn/scikit-learn/pull/4270
Feedback would be very welcome.
Best,
Jan
On 25.11.2014 18:56, Kyle Kastner wrote:
> Gradient based optimization
+1 for using GPs for hyperparameter search algorithms and for adding
gradient-based optimization.
I would like to add that it would be nice to have less redundancy
between the GP's correlation_models and the kernels in pairwise.
Additionally, it would be nice if kernels could be specified externally.
Gradient based optimization (I think this might be related to the recent
variational methods for GPs) would be awesome.
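For concreteness, gradient-based optimization of a kernel parameter via the log marginal likelihood can be sketched in plain NumPy/SciPy. Everything below (the toy data, the single-length-scale RBF kernel, the variable names) is an illustrative stand-in, not code from the PR:

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-D regression problem; all values here are illustrative.
rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X).ravel() + 0.1 * rng.randn(30)
noise = 0.1
n = len(X)

def nll_and_grad(log_ell):
    """Negative log marginal likelihood of an RBF-kernel GP and its
    analytic gradient w.r.t. log(length_scale)."""
    ell = np.exp(log_ell[0])
    d2 = (X - X.T) ** 2                       # pairwise squared distances
    K_rbf = np.exp(-0.5 * d2 / ell ** 2)
    K = K_rbf + noise ** 2 * np.eye(n)
    K_inv = np.linalg.inv(K)                  # fine for a sketch; use a Cholesky solve in practice
    alpha = K_inv @ y
    _, logdet = np.linalg.slogdet(K)
    nll = 0.5 * y @ alpha + 0.5 * logdet + 0.5 * n * np.log(2 * np.pi)
    dK = K_rbf * d2 / ell ** 2                # dK / d log(length_scale)
    grad = 0.5 * np.trace((K_inv - np.outer(alpha, alpha)) @ dK)
    return nll, np.array([grad])

# Supplying the analytic gradient (jac=True) is the point: L-BFGS-B then
# needs far fewer likelihood evaluations than a gradient-free search.
res = minimize(nll_and_grad, x0=np.array([0.0]), jac=True, method="L-BFGS-B")
best_length_scale = np.exp(res.x[0])
```

The same pattern extends to multiple kernel parameters by returning one gradient entry per parameter.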
On Tue, Nov 25, 2014 at 12:54 PM, Mathieu Blondel
wrote:
>
>
> On Wed, Nov 26, 2014 at 2:37 AM, Andy wrote:
>
>>
>> What I think would be great to have is gradient based optimization of
>> the kernel parameters
On Tue, Nov 25, 2014 at 12:50:13PM -0500, Kyle Kastner wrote:
> The HODLR (Hierarchical Off Diagonal Low Rank approximation) solver is a low
> rank approximation technique that allows you to use a Sherman-Morrison-Woodbury
> (lots of Google references) for efficiently updating the inverse of the GP
>
On Wed, Nov 26, 2014 at 2:37 AM, Andy wrote:
>
> What I think would be great to have is gradient based optimization of
> the kernel parameters
+1
This is one of the most appealing features of GPs IMO.
Mathieu
France is nice and probably a lot warmer than "new France" right now :)
The HODLR (Hierarchical Off Diagonal Low Rank approximation) solver is a
low rank approximation technique that allows you to use a
Sherman-Morrison-Woodbury update (lots of Google references) for efficiently
updating the inverse of the GP covariance matrix.
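A rough illustration of why such rank-structured updates pay off: the Woodbury identity corrects a cheap inverse with only a small k × k solve instead of a full O(n³) re-inversion. The matrices below are toy stand-ins (a diagonal base plus a random rank-3 correction), not George's or sklearn's actual structures:

```python
import numpy as np

rng = np.random.RandomState(0)
n, k = 50, 3                                 # k << n: low-rank correction

A = np.diag(rng.uniform(1.0, 2.0, n))        # easy-to-invert base matrix
U = rng.randn(n, k)
C = np.eye(k)
V = U.T

A_inv = np.diag(1.0 / np.diag(A))            # O(n) inverse of the diagonal base

# Woodbury identity:
#   (A + U C V)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}
small = np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U)   # only k x k
updated_inv = A_inv - A_inv @ U @ small @ V @ A_inv

# Compare against the direct O(n^3) inverse
direct = np.linalg.inv(A + U @ C @ V)
max_error = np.abs(updated_inv - direct).max()
```

HODLR-style solvers apply this idea recursively to the hierarchically low-rank off-diagonal blocks of the kernel matrix.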
There are definitely API questions that I also just discussed with Dan.
There are some things that we could improve, but I think the solutions
depend a lot on whether we want to do kernel engineering or not.
My thinking was that this part is probably the most controversial one,
so this is what I asked
On Tue, Nov 25, 2014 at 12:23:50PM -0500, Kyle Kastner wrote:
> specifically a HODLR solver.
What is this? Can you tell us more?
> One very specific reason to focus on GP code quality would be that it
> opens the door to use sklearn's own code to implement some very nice
> hyperparameter search algorithms.
I have some familiarity with the GP stuff in sklearn, but one of the big
things I really *want* is something much more like George - specifically a
HODLR solver. Maybe it is outside the scope of the project, but I think GPs
in sklearn could be very useful and computationally tractable for "big-ish" data.
On Tue, Nov 25, 2014 at 12:13:53PM -0500, Andy wrote:
> the way that is done in GPy? Or should we leave that to GPy? Then the
> question is how useful our implementation is without it :-/
Wait a second. Our implementation is crap. That is clear. It is still
very useful. Claiming that it is not