> 2. Models that do not do output filtering to restrict the reproduction of 
> training data unless the tool can ensure the output is license compatible?
> 
> 2 would basically prohibit locally run models.


I am not in favor of this, for the reasons listed above. There isn't a difference 
between this and a contributor copying code and sending it our way. We still need 
to validate that the code can be accepted.

We also have the issue of this being a broad stroke. If a user asked a model to 
write a test for code the human wrote, do we reject the contribution because they 
used a local model? This poses very little copyright risk, yet our policy would 
now reject it.


> On Jun 25, 2025, at 9:10 AM, Ariel Weisberg <ar...@weisberg.ws> wrote:
