this model appears to have a quirk where it might wrap output code in
[python][/python] tags or similar

it could be possible to automatically evaluate such code and label it,
for example, "fake code" if it does not actually run
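a minimal sketch of that idea, assuming the tags literally look like [python][/python] — the helper names here (extract_blocks, label_block) are made up for illustration:

```python
import re
import subprocess
import sys
import tempfile

def extract_blocks(text):
    """Return the code inside each [python]...[/python] pair."""
    return re.findall(r"\[python\](.*?)\[/python\]", text, flags=re.DOTALL)

def label_block(code, timeout=5):
    """Run the code in a subprocess; label it "fake code" if it errors out."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return "fake code"  # never finished, which is arguably worse
    return "real code" if result.returncode == 0 else "fake code"

output = "sure! [python]print('hi')[/python] or [python]prnt('oops')[/python]"
for block in extract_blocks(output):
    print(label_block(block))
```

(running untrusted model output like this would of course want a sandbox, not a bare subprocess)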

or we could just call it all "fake code" since it is made with a language model
