Yup, you nailed it! That's the idea :) It saves the time you'd otherwise
spend building a machine learning model, because we all already have access
to a big one in the form of a large language model (Google/OpenAI are paying
the electricity cost).

So this can help save time by removing that overhead and producing a
"deterministic" classifier in the form of a Python program that is likely
more concise, maintainable, and interpretable than machine learning models
trained for this task.



On Mon, Mar 11, 2024 at 7:03 AM Chary Chary <[email protected]> wrote:

> Hi,
>
> I briefly scanned through the article.
>
> So in essence you give the LLM a bunch of CSV examples and then ask the
> LLM to write Python code, which would categorize similar transactions
> based on the keywords.
>
> So, this is a kind of alternative to training an ML model on previous
> transactions to be able to categorize new ones.
>
> Did I get the main idea correctly?
>
> On Monday, March 11, 2024 at 12:36:00 AM UTC+1 [email protected] wrote:
>
>> Made a quick prompt over the weekend:
>> https://gist.github.com/jaanli/1f735ce0ddec4aa4d1fccb4535f3843f
>>
>> Results are that my partner (someone non-technical, design background,
>> but familiar with prompt engineering) can use the prompts—the last thing I
>> would want is an inscrutable system that I manually built to import
>> transactions from our dozen institutions across multiple countries &
>> currencies, that they can't re-use or extend.
>>
>> Visual Studio Code and the Beancount extension are already a stretch for
>> them so having something that works with a single prompt at a time and copy
>> and pasting was my goal.
>>
>> Hope this helps someone else! Surprised that these tools are not easier
>> to use (and thank you for beancount, this wouldn't be possible otherwise :)
>>
>> Would be fun to extend this with DSPy (
>> https://github.com/stanfordnlp/dspy/blob/main/intro.ipynb) which could
>> likely help squeeze several different converters into a few signatures
>> (compressed prompts), and things like chain-of-thought prompting (iterative
>> runs of large language models) would further reduce the
>> extract-transform-load overhead that has kept me from trying beancount all
>> these years.
>>
>> Very best,
>> Jaan
>>
> --
> You received this message because you are subscribed to a topic in the
> Google Groups "Beancount" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/beancount/aoZ7-H1tCX4/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/beancount/eefe5ba4-c0cd-46cd-9fb8-8922b399c221n%40googlegroups.com
> <https://groups.google.com/d/msgid/beancount/eefe5ba4-c0cd-46cd-9fb8-8922b399c221n%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>
