I agree with Todd's sentiments about unnecessarily coupling feature priority to the availability of a patch. There are patches we've developed internally and then thrown away because we couldn't tie them to a production use case, or because we thought a better design might be needed. There are also patches we've let simmer for a couple of weeks so we could think about them and cross-talk. Every new feature adds to the complexity of an already-complex system. We should make sure the feature has demand.
I don't think we should tie the maturity of a patch to a metric as simple as the presence of a unit test. Unit tests are great, but they're a means to quality, not an end. I've seen super-buggy code that had a number of unit tests. I've seen code that passed its unit tests, but the test assumptions were wrong. I've seen code that failed a unit test because the unit test was horribly written. There are a lot of signals that make a feature more palatable: How are you using this? Can you give examples? Is it in production, or just something you wanted? How actively is it being used? What sort of scaling has been tested? These are questions that contributors should be prepared to answer if they submit a large diff that touches a critical section of code and represents a design decision that many future features will have to build on.
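To make the "wrong test assumptions" point concrete, here's a minimal, made-up sketch (Python, hypothetical names, not from any patch under discussion) where the test passes only because its author baked in the same mistaken assumption as the code:

    import unittest

    # Hypothetical example: the function assumes 'elapsed' is in
    # milliseconds, but real callers pass seconds.
    def remaining_budget_ms(total_budget_ms, elapsed):
        # BUG: 'elapsed' actually arrives in seconds, not milliseconds.
        return total_budget_ms - elapsed

    class RemainingBudgetTest(unittest.TestCase):
        def test_remaining_budget(self):
            # The test author shared the units assumption, so this passes
            # even though production callers get nonsense results.
            self.assertEqual(remaining_budget_ms(1000, 250), 750)

    if __name__ == "__main__":
        unittest.main()

The suite is green, but the code is still broken in production, which is why "has a unit test" on its own doesn't tell us much about maturity.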