On Sun, 1 Feb 2026 at 10:31, Denys Vlasenko via busybox
<[email protected]> wrote:
>
> Please describe exact testcases of the attacks you defend against.
>

Because the patchset was provided in downloadable files, I decided to
check how Gemini 3 Fast would react to this request:

https://gemini.google.com/app/2a7a4f75926d930c

As you can see, I did not provide the AI any support but simply kept
answering "yes" (i.e. yes | gemini -p $initial-prompt). I did a few
experiments in the past and the output ALWAYS required manual
intervention, never reaching an acceptable conclusion on its own. In
this case, I expect the same pattern of failure (I have not checked
yet, I am just sharing it for the moment). Why should we care?
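
For reference, here is a minimal sketch of such an unattended run,
assuming the gemini CLI reads its confirmation prompts from stdin
(only the -p flag comes from the command above; the variable name,
the prompt wording and the output redirection are purely
illustrative):

    # hypothetical prompt, e.g. Denys' question plus the patch files
    initial_prompt="Please describe exact testcases ..."
    # "yes" prints "y" forever, so every confirmation is auto-accepted
    yes | gemini -p "$initial_prompt" > gemini-output.log 2>&1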

For those who do not have a background yet, the AI can stimulate
learning (not good, not bad by itself: everything depends on the
mindset, as usual). For those who already master the subject, it is
the quickest and cheapest option to collect ideas (unless
scrutinizing every byte of the AI output is the default policy, in
which case we are working for the AI, not the AI for us [*]).
Finally, AI exists and it is not going to fade away just because we
ignore it, so it is better to know it. By the way, this doesn't mean
that everybody should know (everything), but it matters that someone
does.

[*] This claim has several exceptions outside coding, for example in
text generation, where human review is essential and mandatory.
Within coding, think of when the AI is asked to create a canvas that
lets us start coding without facing the blank-page wall, or when the
starting canvas is so "common" that it is fine to delegate it to an
automated tool rather than writing the same boring loops and error
checks, in which humans usually fail more often than the AI because
of typos or similar attention/keyboard glitches.

Best regards, R-
