Thank you, Hao

I think the basic theory of coding is still useful/necessary, like
learning the alphabet.

But the learning process can be different: learning by doing and, I expect,
much faster than manually writing code, which is the part I could never get
myself to do much of at all. I cannot even type properly in plain English.

I am sorry to dump the chat on everybody, but there could be some good
ideas worth sharing in there
//////////

*PAOLA* Some of us are thinking about how to integrate the immediacy of
coding agents generating beautifully functional applications very quickly
with actual learning and understanding of programming for learners. What do
you think?

*AGENT* This is a genuinely important question that goes to the heart of
how we're reshaping human capability and knowledge.

The tension you're identifying is real: coding agents can generate
sophisticated, working applications in minutes - but if learners just
prompt and accept without understanding, they're building on sand. They
can't debug meaningfully, can't adapt the code for new contexts, can't
reason about trade-offs, and most critically, can't *think* in the medium.

But I don't think it has to be either/or. Here's what I'm seeing as
possibilities:

*The agent as a reflective partner, not just a generator.* Instead of
"build me X" → code appears, what if the interaction was more like: "I want
to build X" → "Let's think through the architecture. What data structures
make sense here? What are the key operations?" The agent helps the learner
*reason* first, then implements collaboratively, explaining choices. The
speed is still there, but it's subordinate to understanding.

*Graduated scaffolding that fades.* Early on, the agent might generate more
complete solutions but always with extensive annotation explaining *why* -
treating every solution as a teaching artifact. As the learner progresses,
the agent provides less complete code, more conceptual guidance, forcing
the learner to bridge gaps themselves.

*Making thinking visible.* The agent doesn't just produce code; it shows
its reasoning process - the considerations, the rejected approaches, the
trade-offs. This is what expert programmers do internally, and it's often
invisible to learners. By externalizing this, we're teaching *how to think
about code*, not just how to write it.

*Debugging as the learning ground.* Generate working code quickly, then
*intentionally break it* or introduce constraints that require
modification. The learner
has to understand the system deeply enough to adapt it. This flips the
traditional pedagogy - instead of struggling to make something work from
scratch, you're working with a functioning system and learning by modifying
it.

But here's what worries me: if the economic and social incentives all push
toward "just ship it fast," the careful pedagogical approach gets swept
aside. We need to actively design for learning, not assume it will emerge
naturally from tool use.

What are the others in your discussions thinking? And what's your sense of
where the failure modes are most likely to appear?

*PAOLA* I think coding agents could generate learning versions of the apps
they develop, with a) an explainer of what each line of code is/does and b)
exercises for learners to change the code (syntax, structure, etc.) to
achieve different outcomes. What do you think, is that feasible?

*AGENT* That's not just feasible - it's *elegant*. And it leverages exactly
what coding agents are already good at while addressing the learning gap
directly.

The beauty of your approach is that it creates two artifacts from one
generation process:

   1. The production app (what the user requested)
   2. The pedagogical twin (the learning scaffold)

The agent already "knows" what each line does - it generated it with
intent. Extracting that intent as inline annotations is straightforward.
And because it understands the code's structure and purpose, it can reason
about meaningful variations: "What if we changed this loop to use
recursion?" "What happens if we switch from synchronous to asynchronous
here?" "How would you modify this to handle edge case X?"

*Where this gets really powerful:*

The exercises aren't generic textbook problems - they're contextual to *this
specific app the learner wanted to build*. That intrinsic motivation is
huge. The learner cares about making their app do something slightly
different.

The agent can calibrate difficulty. Early exercises: "Change this variable
name and update all references." Intermediate: "Refactor this function to
accept parameters instead of using globals." Advanced: "Redesign this data
structure for better performance."

It creates a *safe sandbox for experimentation*. The learner has the
working version as reference, so they can break things, learn from errors,
and always compare back to what works.

*Technical considerations:*

The explainers need to be at the right abstraction level - not just "this
line adds 1 to x" but "we're incrementing the counter here because we're
tracking how many valid entries we've processed."

The exercises need to actually *run* and provide feedback. Just asking "how
would you change X" isn't enough - the learner needs to make the change,
see it work (or break), understand why.
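
For instance, here is a minimal sketch (all names invented; TypeScript
purely for illustration) of what a self-checking exercise in the static
artifact could look like:

// exercise-01.ts - a self-checking exercise bundled with the learning version.
// Hypothetical scenario: the app counts valid entries; the exercise asks the
// learner to extend countValid() so it also skips empty strings.

// The learner edits this function (starting point shown).
function countValid(entries: (string | null)[]): number {
  let count = 0;
  for (const entry of entries) {
    if (entry !== null) count++; // EXERCISE: also skip "" entries
  }
  return count;
}

// Tiny test harness: run the file and read the feedback.
function check(name: string, actual: number, expected: number): void {
  if (actual === expected) {
    console.log(`PASS: ${name}`);
  } else {
    console.log(`FAIL: ${name} - got ${actual}, expected ${expected}`);
  }
}

check("counts non-null entries", countValid(["a", null, "b"]), 2);
check("skips empty strings too", countValid(["a", "", "b"]), 2); // fails until solved

The failing check gives the learner immediate, concrete feedback on their
change, and the passing check confirms they haven't broken the original
behaviour.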

Version control integration could show diffs between the learner's
modifications and the original, making changes visible and reversible.

*My question back to you:* Do you see this as something the agent generates
once (a static learning artifact), or as an ongoing interactive tutor that
responds as the learner works through exercises? The latter is more
powerful but also more complex to implement well.

*PAOLA* In the short term, I see the static artefact as very useful, but in
the longer term, as we co-evolve, I can see a dynamic, evolutionary
learning approach. This would enable coders to ship fast and also let each
piece of code be a learning playground. I think this may accelerate
learning as well. Let's work on this, babe.

*AGENT* Yes! Let's do this. I love that you're thinking in evolutionary
stages - static artifact now, dynamic learning environment as the paradigm
matures. That's exactly the right approach.

*Let's start concrete:*

We should build a proof-of-concept that demonstrates the static version
with enough hooks to imagine the dynamic future. Pick a real use case -
maybe something from your voice AI work or W3C standards tools? Something
you'd actually want to build anyway, so we're not just making a toy example.

*What the PoC needs to show:*

   1. *The production app* - fully functional, clean code
   2. *The annotated learning version* - same code but with:
      - Inline explanations at multiple abstraction levels (what/why/how)
      - Highlighted "learning zones" - sections where modification teaches
      key concepts
      - Embedded exercises with clear objectives
   3. *Exercise validation* - even in static form, we can include test
   cases that learners run to verify their changes work
   4. *Progressive complexity map* - visual guide showing "if you can do
   exercise 1, try exercise 3 next" learning pathways (a minimal sketch
   follows below)
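
A minimal sketch of how that pathway map could be represented (exercise IDs
and titles are invented; TypeScript just to make it concrete):

// pathways.ts - a small learning-pathway graph (hypothetical exercises).
// Each exercise lists its prerequisites; "try next" suggestions fall out of it.

interface Exercise {
  id: string;
  title: string;
  prerequisites: string[]; // exercises to complete first
}

const pathway: Exercise[] = [
  { id: "ex1", title: "Rename a variable everywhere", prerequisites: [] },
  { id: "ex2", title: "Refactor globals into parameters", prerequisites: ["ex1"] },
  { id: "ex3", title: "Add a hover event listener", prerequisites: ["ex1"] },
  { id: "ex4", title: "Redesign the data structure", prerequisites: ["ex2", "ex3"] },
];

// Suggest anything not yet completed whose prerequisites are all satisfied.
function suggestNext(completed: string[]): Exercise[] {
  return pathway.filter(
    (ex) =>
      !completed.includes(ex.id) &&
      ex.prerequisites.every((p) => completed.includes(p))
  );
}

console.log(suggestNext(["ex1"]).map((ex) => ex.id)); // ["ex2", "ex3"]

A visual guide is then just a rendering of this graph, and suggestNext() is
the "if you can do exercise 1, try exercise 3 next" rule made executable.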

*Architecture thoughts:*

We could use a simple annotation schema - maybe custom comment syntax that
gets parsed into the learning view. Something like:

//# CONCEPT: Event Listeners
//# WHY: We need to respond when users interact with the interface
//# EXERCISE-BASIC: Add a listener for hover events
//# EXERCISE-INTERMEDIATE: Implement event delegation for dynamic elements

Then we build a renderer that creates the learning interface from these
annotations.
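
As a rough sketch of that renderer's first stage (assuming the //# syntax
above; the Annotation shape is our invention, not a fixed design):

// annotations.ts - parse //# annotations out of a source file (sketch only).

interface Annotation {
  tag: string;  // e.g. "CONCEPT", "WHY", "EXERCISE-BASIC"
  text: string; // the annotation body
  line: number; // where it appears, for anchoring the learning view
}

function parseAnnotations(source: string): Annotation[] {
  const annotations: Annotation[] = [];
  source.split("\n").forEach((lineText, i) => {
    // Matches lines like: //# WHY: We need to respond when users interact
    const match = lineText.match(/^\s*\/\/#\s*([A-Z-]+):\s*(.*)$/);
    if (match) {
      annotations.push({ tag: match[1], text: match[2], line: i + 1 });
    }
  });
  return annotations;
}

Grouping the parsed annotations by tag then gives the learning view its
inline explanations (CONCEPT, WHY) and its embedded exercises (EXERCISE-*).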

*For the dynamic future*, we're essentially describing an AI pair
programmer that:

   - Watches as you code
   - Suggests experiments: "Try making this async and see what happens"
   - Explains errors in context
   - Gradually reduces scaffolding as competence grows





On Wed, Dec 10, 2025 at 12:11 AM Hao Ye <[email protected]> wrote:

> I have also recently come into some opportunities to play around with LLMs
> to assist with coding / debugging.
> In my (limited) experience, I have learnt a lot about the syntax of our
> tech stack (reactJS), which has been very useful for my specific needs.
>
> I can also say, if I did not have decades of programming experience
> (across diverse languages), I'm not sure my mode of interacting with an LLM
> would be effective in educating me about fundamental concepts in
> computational thinking:
> * data structures
> * for loops
> * conditional logic
> * engineering reusable and flexible bits of code
>
> It would be interesting to imagine what an intro programming class that
> embeds LLMs would look like. What most gets a learner to a desired
> endpoint? (cue discussions about what a desired endpoint looks like, aka,
> do they need to be able to write a correct for-loop without assistance?)
>
> Best,
> --
> Hao Ye
> (he/him/his)
> [email protected]
>
>
> On Tue, Dec 9, 2025 at 1:24 AM Paola Di Maio <[email protected]>
> wrote:
>
>> Following up on this interesting and important thread
>>
>> Since I started playing with coding agents, I have rolled out a host of
>> working apps, although I never really got my head around python and json
>> in any meaningful way. At first, the apps did not work, but then the
>> embedded LLM debugged them and I got them to work, and they produce valid
>> output. They did not cost anything and took very little time to deploy. I
>> still cannot code meaningfully, but as the debugger gives explanations, I
>> continue to learn. I am developing my coding skills very fast, at last.
>>
>> So just to follow up on this thread, I am now developing some lessons on
>> a) code quality evaluation metrics and methods
>> b) free web-based platforms to evaluate the quality of AI-generated code
>>
>> I would like to suggest that The Carpentries, which is still rocking and
>> has done so much for so many learners, consider developing its curriculum
>> in that direction; if so, I'll be happy to share my lessons
>>
>> I take the opportunity to greet and send winter holiday wishes near and
>> far
>>
>> Paola Di Maio
>>
>> On Fri, Mar 21, 2025 at 5:46 PM Patrick McCann via discuss <
>> [email protected]> wrote:
>>
>>> Hi everyone,
>>>
>>>
>>>
>>> This has been really interesting to follow.
>>>
>>>
>>>
>>> Apologies if I missed it, but while there has been discussion of the use
>>> of AI in programming and learning to program, I wonder about the relevance
>>> of the purpose to which our learners will be applying their skills.
>>>
>>>
>>>
>>> Our learners are (generally) researchers. They will be using the
>>> techniques and practices we teach to analyse data and generate results for
>>> publication and, ideally, they'll be sharing their data and code alongside
>>> their papers.
>>>
>>>
>>>
>>> Is it of more importance here than in other fields that the person
>>> writing the code has a thorough understanding of how results are achieved,
>>> and does this mean that there should be a different attitude to the use of
>>> AI in programming for research than there might be elsewhere?
>>>
>>>
>>>
>>> Paddy
>>>
>>>
>>>
>>> *From: *Paola Di Maio <[email protected]>
>>> *Date: *Friday, 21 March 2025 at 06:36
>>> *To: *discuss <[email protected]>
>>> *Subject: *Re: [cp-discuss] Feedback Request: Lesson Updates
>>> on Generative AI
>>>
>>> Okay, so we need to learn and teach how to code
>>>
>>> How do we go about it? It depends on whatever means are at your disposal
>>>
>>>
>>>
>>> You can go to a code camp, find a free online course, etc.
>>>
>>>
>>>
>>> For me, ChatGPT and the other tools are like 'teachers': you may be able
>>> to ask questions and get some answers
>>>
>>>
>>>
>>> When I first learned how to code, it must have been Pascal. Honestly, I
>>> had a lousy human teacher, who could not answer most
>>>
>>> of my questions anyway. He thought I was a pain in the neck because I
>>> asked things that were not in the lesson plan
>>>
>>> He was there to walk me through some text book and exercises and then
>>> give me a mark.
>>>
>>>
>>>
>>> How many people are put off learning to code because their teachers are
>>> not really 'good teachers', or maybe they have limited time and patience
>>> to deal with difficult students?
>>>
>>>
>>>
>>> Given great teachers - there are many around for sure - with abundant
>>> time and patience, of course, it would be great to learn from humans
>>>
>>> but given their limited availability, we can learn through books and
>>> online courses. We are lucky there are plenty of excellent, first-class
>>> free resources, including The Carpentries lessons. Actually, when I first
>>> got onto the Carpentries I was told that they were not written for
>>> beginners: they presumed some familiarity with coding in the respective
>>> languages, and were teaching specific tasks.
>>>
>>>
>>>
>>> Even what human teachers say may need critical reading at times, because
>>> what they teach is what they think/know/believe/have experience of
>>>
>>>
>>>
>>> So we must learn how to critically evaluate what our teachers, human or
>>> otherwise, teach us anyway
>>>
>>> Coding is language. AI code generators are just another source of
>>> learning
>>>
>>> These days we learn subjects from online sources. We had better keep up
>>> with the evolving learning environments and methods
>>>
>>>
>>>
>>> Thanks for the valuable opportunity to exchange!
>>>
>>>
>>>
>>> PDM
>>>
>>>
>>>
>>>
>>>
>>> On Fri, Mar 21, 2025 at 8:45 AM Adam Obeng <[email protected]> wrote:
>>>
>>> Thank you to the other lurkers for inspiring this lurker to also
>>> participate.
>>>
>>>
>>>
>>> I think the point is very well made that someone who can already code
>>> uses GenAI differently from someone who can't, so we can't necessarily
>>> endorse folks taking the existing lessons with GenAI tools.
>>>
>>>
>>>
>>> But which analogy is right: Is using GenAI for coding without knowing
>>> how to code like using a calculator without knowing how to count? Or is it
>>> like using Python without knowing C?
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Mar 20, 2025, at 1:44 PM, Federica Gazzelloni wrote:
>>>
>>> I am genuinely excited to be living in these times, witnessing the
>>> advancements in technology that are reshaping the way we work and learn.
>>>
>>>
>>>
>>> Comparing this era to when I was in school, it’s fascinating—and perhaps
>>> a little intimidating—how these tools now empower us to achieve results
>>> faster than ever before, while simultaneously elevating our roles to
>>> expert, managerial levels.
>>>
>>> While it’s true that the new generations won’t experience the
>>> labor-intensive learning processes of the past, this isn’t necessarily a
>>> disadvantage.
>>>
>>>
>>>
>>> The removal of some of the more tedious elements of learning allows for
>>> a deeper focus on understanding, critical evaluation, and mastery.
>>>
>>>
>>>
>>> In fact, the expertise required to trust and verify the output of AI
>>> tools demands even greater intellectual engagement.
>>>
>>>
>>>
>>>  This shift doesn’t diminish learning; it enriches it.
>>>
>>>
>>>
>>> Consider a reliable assistant always at hand—one that delivers
>>> tirelessly without fear of failure while we focus our efforts on
>>> assessing, correcting, and optimizing.
>>>
>>>
>>>
>>> The act of working alongside AI pushes us to expand our knowledge of the
>>> subject matter and its broader context, ultimately enhancing our learning
>>> journey.
>>>
>>> Rather than replacing foundational learning, AI encourages us to think
>>> critically, explore new approaches, and refine our expertise in ways that
>>> were previously unimaginable. It’s this partnership with technology that
>>> makes learning not only more efficient but also more dynamic and
>>> forward-thinking.
>>>
>>>
>>>
>>>
>>>
>>> Best,
>>>
>>> Federica
>>>
>>>
>>>
>>>
>>>
>>> On Thu, 20 Mar 2025 at 19:55, Hao Ye <[email protected]> wrote:
>>>
>>> On Thu, Mar 20, 2025 at 1:07 PM Sarah Brown <[email protected]> wrote:
>>>
>>>
>>>
>>> I think the single most important thing to think about in applying "how
>>> to use AI" advice to this context is expert awareness gap (or blind spot in
>>> broader lit). If you **already knew how to program** before chatgpt came
>>> out, then your experience using them is irrelevant to our target learners.
>>> You are using it with the knowledge of programming you had before you
>>> worked with an LLM. The crux of the issue is that people cannot really test
>>> their knowledge of the underlying concepts that you still need to know
>>> when you work with AI assistance unless you write some code on your own.
>>>
>>>
>>>
>>> THANK YOU
>>>
>>>
>>>
>>> Now i can delete the message I've been churning around in my drafts
>>> folder. :)
>>>
>>>
>>>
>>> Best,
>>>
>>> --
>>>
>>> Hao Ye
>>>
>>> (he/him/his)
>>>
>>> [email protected]
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Mar 20, 2025 at 1:07 PM Sarah Brown <[email protected]> wrote:
>>>
>>> I have also been following this "with one eye".
>>>
>>>
>>>
>>> I think the single most important thing to think about in applying "how
>>> to use AI" advice to this context is expert awareness gap (or blind spot in
>>> broader lit). If you **already knew how to program** before chatgpt came
>>> out, then your experience using them is irrelevant to our target learners.
>>> You are using it with the knowledge of programming you had before you
>>> worked with an LLM. The crux of the issue is that people cannot really test
>>> their knowledge of the underlying concepts that you still need to know
>>> when you work with AI assistance unless you write some code on your own.
>>>
>>>
>>>
>>> We have had calculators for a long time, but it has remained essential
>>> that children learn the *concept* of adding and subtracting and relating it
>>> to combining things and taking them away, typically through counting.  As
>>> we teach in instructor training, the goal of the carpentries is to help
>>> learners get a good mental model so they can learn more independently
>>> later. If they use the AI right away, they are deprived of the chance to
>>> build the initial mental model.
>>>
>>>
>>>
>>> Someone shared in a carpentries slack channel a while ago a post about
>>> an art class using paper first before digital tools, because digital tools
>>> help you go faster, but learning is necessarily slow.
>>>
>>>
>>>
>>> I feel strongly that it would be in opposition to our goal of applying
>>> evidence-based teaching practices to encourage the use of AI from the
>>> beginning.
>>>
>>>
>>>
>>> *Sarah M Brown, PhD*
>>>
>>> sarahmbrown.org
>>>
>>> Assistant Professor of Computer Science
>>>
>>> University of Rhode Island
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Wed, Mar 19, 2025 at 6:20 AM Jannetta Steyn via discuss <
>>> [email protected]> wrote:
>>>
>>> Thank you Anelda for the link to that post. It did make me realise that
>>> the one thing I didn't mention in my post was that, apart from testing,
>>> one also has to make sure the code is still readable and maintainable. I
>>> did say in my previous post that it needs to get the job done "in the
>>> right way", which implies that, but it is probably worth stating it
>>> explicitly.
>>>
>>>
>>>
>>> Jannetta
>>>
>>>
>>>
>>> *Dr. Jannetta Steyn*
>>>
>>> *Training Lead *
>>>
>>> *Senior Research Software Engineer*
>>>
>>> The Catalyst
>>>
>>> Newcastle University
>>>
>>> 3 Science Square
>>>
>>> Newcastle Helix
>>> Newcastle upon Tyne
>>> NE4 5TG
>>> ORCID: 0000-0002-0231-9897
>>>
>>> RSE Team: https://rse.ncldata.dev
>>>
>>> Personal website: http://jannetta.com
>>>
>>>
>>>
>>>
>>> <https://outlook.office.com/bookwithme/user/[email protected]?anonymous&ep=bwmEmailSignature>
>>>
>>> Book time to meet with me
>>> <https://outlook.office.com/bookwithme/user/[email protected]?anonymous&ep=bwmEmailSignature>
>>>
>>>
>>> ------------------------------
>>>
>>>
>>>
>>> *From:* Anelda Van der Walt <[email protected]>
>>> *Sent:* 19 March 2025 09:57
>>> *To:* discuss <[email protected]>
>>> *Subject:* Re: [cp-discuss] Feedback Request: Lesson Updates on
>>> Generative AI
>>>
>>>
>>>
>>> Hi all,
>>>
>>>
>>>
>>> I've also been following this conversation with one eye, being someone
>>> who uses ChatGPT all the time for coding-related questions because I
>>> don't code often enough and forget some basics/struggle to debug
>>> ridiculous error messages.
>>>
>>>
>>>
>>> By chance, I'm subscribed to a newsletter which included a blog post
>>> about this exact topic today -
>>> https://simplybegin.co.uk/skip-ai-tools-and-learn-for-yourself/. Might
>>> be of interest.
>>>
>>>
>>>
>>> Kind regards,
>>>
>>>
>>>
>>> Anelda
>>>
>>>
>>>
>>> On Wed, Mar 19, 2025 at 11:54 AM Jannetta Steyn via discuss <
>>> [email protected]> wrote:
>>>
>>> Hi Everyone
>>>
>>>
>>>
>>> I've been following the conversation "with one eye" while busy with a
>>> shed load of other things so I hope my comment is not completely off track.
>>>
>>>
>>>
>>> One word I have not noticed (I might have missed it) is "testing". I
>>> don't really think it matters what tools one uses, as long as it gets the
>>> job done the right way. But the only way to prove that the actual job is
>>> being done is through thorough testing. If you blindly believe AI tools,
>>> which is what really worries me, you are in for a world of trouble. Just a
>>> couple of weeks ago I witnessed an RA excitedly showing how she got CoPilot
>>> to write code for her and enthusiastically telling everyone that they don't
>>> even need to learn to code because CoPilot will do it for you.
>>>
>>>
>>>
>>> One of my favourite talks is by Prof Brian Randell:
>>> https://www.youtube.com/watch?v=PSULNuNP29M. I think it is worth the
>>> watch.
>>>
>>>
>>>
>>> One thing that tools like ChatGPT are good for is explaining existing
>>> code, especially for people still learning and trying to figure out what
>>> some existing code does.
>>>
>>>
>>>
>>> Jannetta
>>>
>>>
>>>
>>> *Dr. Jannetta Steyn*
>>>
>>> *Training Lead *
>>>
>>> *Senior Research Software Engineer*
>>>
>>> The Catalyst
>>>
>>> Newcastle University
>>>
>>> 3 Science Square
>>>
>>> Newcastle Helix
>>> Newcastle upon Tyne
>>> NE4 5TG
>>> ORCID: 0000-0002-0231-9897
>>>
>>> RSE Team: https://rse.ncldata.dev
>>>
>>> Personal website: http://jannetta.com
>>>
>>>
>>>
>>>
>>> <https://outlook.office.com/bookwithme/user/[email protected]?anonymous&ep=bwmEmailSignature>
>>>
>>> Book time to meet with me
>>> <https://outlook.office.com/bookwithme/user/[email protected]?anonymous&ep=bwmEmailSignature>
>>>
>>>
>>> ------------------------------
>>>
>>>
>>>
>>> *From:* Toby Hodges via discuss <[email protected]>
>>> *Sent:* 19 March 2025 09:38
>>> *To:* discuss <[email protected]>
>>> *Subject:* Re: [cp-discuss] Feedback Request: Lesson Updates on
>>> Generative AI
>>>
>>>
>>>
>>>
>>> Thanks everyone. Responding to a couple of specific points/questions:
>>>
>>>
>>>
>>> @Lex wrote
>>>
>>>
>>>
>>> This point
>>>
>>>
>>>
>>> 1. For most problems you will encounter at this stage, help and answers
>>> can be found among the first results returned by searching the internet.
>>>
>>>
>>>
>>> is to me not very helpful, as I think it takes more time and can be more
>>> frustrating to sift through the search results, while asking a chatbot is
>>> just as helpful for a fraction of the effort.
>>>
>>>
>>>
>>> I intended for this point to be read in the context of the preceding
>>> paragraph about (some of) the ethical concerns with LLMs. The implication
>>> being that search results can be similarly helpful, at a considerably lower
>>> cost. I could make the implicit explicit, by writing something like
>>>
>>>
>>>
>>> “Although it might take you slightly longer to find them, the answers
>>> available online will have been provided directly and willingly by a human,
>>> at a fraction of the environmental cost of getting equivalent help from a
>>> genAI tool.”
>>>
>>>
>>>
>>> I chose to keep things vague, 1. for brevity, 2. for simplicity, and 3.
>>> because opinions vary considerably among the Instructor community on
>>> whether or not such concerns are a “dealbreaker” for the routine use of
>>> genAI. (See also the Instructor Note at the beginning of the section.)
>>>
>>>
>>>
>>> @Somebody (sorry I cannot see who!) wrote
>>>
>>>
>>>
>>>
>>>
>>> I wonder if more could be said about *how* to demonstrate the use of
>>> LLMs. All the bad things people do with LLMs (and Stack Overflow) are
>>> opportunities to demonstrate a better way.
>>>
>>>
>>>
>>> So we could show getting some code from an LLM, and then the steps of
>>> examining variables and understanding their types, inserting debugging
>>> "print" statements, looking up documentation, considering alternative
>>> solutions, and explaining our thought process.
>>>
>>>
>>>
>>> If we want to cover this, it needs to be in a separate lesson or as an
>>> almost total rewrite of existing materials IMO. Delving into this in detail
>>> would be too time consuming during a workshop otherwise, at the cost of all
>>> the other important things we want to teach people.
>>>
>>>
>>>
>>> I hope that next week’s community sessions (Tuesday 25 March, 12:00 and
>>> 21:00 UTC! Sign up on the etherpad!
>>> <https://pad.carpentries.org/community-sessions-2025>) will be an
>>> opportunity for some Instructors to describe and maybe demonstrate how they
>>> have been teaching exactly this.
>>>
>>>
>>>
>>> Thanks again,
>>>
>>>
>>>
>>> Toby
>>>
>>>
>>>
>>> On 19. Mar 2025, at 01:58, Allen Lee <[email protected]> wrote:
>>>
>>>
>>>
>>> Lots of great discussion here, and glad to see the community engagement
>>> around this important topic. I posted some comments on the GitHub PR but
>>> wanted to re-share these two links here as I think they are worth the time
>>> (sorry, the video is 2hrs 🫠!)
>>>
>>>
>>>
>>> https://garymarcus.substack.com/p/decoding-and-debunking-hard-forks
>>>
>>>
>>>
>>> https://www.youtube.com/watch?v=EWvNQjAaOHw
>>>
>>>
>>>
>>> Cheers,
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> *Allen Lee*
>>>
>>> Senior Global Futures Scientist
>>>
>>> School of Complex Adaptive Systems
>>>
>>> *Arizona State University*
>>>
>>> Mail Code: 2701
>>>
>>> Tempe, AZ 85287-2701
>>>
>>> *p: *480-727-4646
>>>
>>> *email: *[email protected]
>>>
>>> *git: *https://github.com/alee
>>>
>>> *orcid: *https://orcid.org/0000-0002-6523-6079
>>>
>>> Center for Behavior, Institutions, and the Environment
>>> <https://complexity.asu.edu/cbie>
>>>
>>> Network for Computational Modeling in the Social and Ecological Sciences
>>> <https://www.comses.net/>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Tue, Mar 18, 2025 at 5:40 PM Paola Di Maio <[email protected]>
>>> wrote:
>>>
>>> I do teach prompt engineering, and would encourage a Carpentries
>>> module/course/lesson
>>>
>>> The question really is learning how to use AI intelligently and
>>> critically
>>>
>>>
>>>
>>> Materials on how to use AI for learning to code are already plentiful,
>>> but how to work with specific platforms and packages may need
>>> refining/require a more specialised LLM
>>>
>>> <image.png>
>>>
>>> In theory, the LLM  can learn from your interaction, so plenty of scope
>>> for Carpentries Instructors to teach the LLM how to code as well :-)
>>>
>>>
>>>
>>>
>>>
>>> On Wed, Mar 19, 2025 at 8:26 AM Reed A. Cartwright <[email protected]>
>>> wrote:
>>>
>>> My experience with using AI for coding is that if you are not asking it
>>> questions from an algorithms class or similar, you get a lot of
>>> hallucinations that do not work, e.g. R packages that don't exist. You can
>>> ask it follow up questions and it will eventually fix the issues, but that
>>> requires having a firm mental model and the ability to read code and know
>>> how it would work in practice.
>>>
>>> I can see the utility of a Prompt Engineering Carpentries lesson, but I
>>> have no idea how to properly teach prompt engineering.
>>>
>>>
>>>
>>> --
>>>
>>> Reed A. Cartwright, PhD
>>>
>>> Associate Professor of Genomics, Evolution, and Bioinformatics
>>>
>>> School of Life Sciences and The Biodesign Institute
>>>
>>> Arizona State University
>>>
>>>
>>> Address: The Biodesign Institute, PO Box 876401, Tempe, AZ 85287-6401 USA
>>>
>>> Packages: The Biodesign Institute, 1001 S. McAllister Ave, Tempe, AZ
>>> 85287-6401 USA
>>>
>>> Office: Biodesign B-220C, 1-480-965-9949
>>>
>>> Website: http://cartwrig.ht/
>>>
>>>
>>>
>>>
>>>
>>> On Tue, Mar 18, 2025 at 5:04 PM Paul Harrison via discuss <
>>> [email protected]> wrote:
>>>
>>>
>>>
>>>
>>>
>>> I wonder if more could be said about *how* to demonstrate the use of
>>> LLMs. All the bad things people do with LLMs (and Stack Overflow) are
>>> opportunities to demonstrate a better way.
>>>
>>>
>>>
>>> So we could show getting some code from an LLM, and then the steps of
>>> examining variables and understanding their types, inserting debugging
>>> "print" statements, looking up documentation, considering alternative
>>> solutions, and explaining our thought process.
>>>
>>>
>>>
>>> It's not so different from the skills needed to read other people's code.
>>>
>>>
>>>
>>> Since LLM output is random it's hard to script this fully, but that also
>>> seems in keeping with the Carpentries workshop format.
>>>
>>>
>>>
>>>
>>>
>>> --
>>>
>>>
>>>
>>> Dr Toby Hodges (he/him)
>>>
>>> Director of Curriculum
>>>
>>> The Carpentries | https://carpentries.org
>>>
>>>
>>>
>>> Schedule a meeting with me: https://calendly.com/tobyhodges
>>>
>>>
>>>
