Loading that link, Gemini says: "This conversation was created with a Gem that has been deleted. Create a new Gem or start a new chat to continue."
I'd like to see the transcript; it sounds interesting. Getting a good prompt is a major factor in getting good results. I had a conversation with ChatGPT about how to get effective results while minimizing hallucinations and stopping it from giving me positive feedback on every response. During it, I asked the chatbot to recommend a prompt for that purpose. Here's what it suggested. It seems to be effective, although it produces lengthy results:

"Answer this as a skeptical expert would, minimizing speculation and avoiding any attempt to affirm or reassure me. If the answer requires assumptions, state them explicitly. If you don't know or can't verify something, say so clearly. I want a strictly factual and critical analysis, not an optimistic or polished summary."

I used to ask it to be concise, but then there were reports that this increases the hallucination rate, so I don't any more.

On Tuesday, June 10, 2025 at 10:23:48 AM UTC-4 Edward K. Ream wrote:

> Here <https://gemini.google.com/gem/ba2fb3843c39/1f68d964dc8b42ea> is my
> first real conversation with an AI. I am astounded by its capabilities.
>
> Note how Gemini gently corrects some poorly-worded questions. It also
> ignores without comment my saying "information" instead of the intended
> "misinformation". I doubt any human could answer my questions as cogently
> and completely.
>
> *An important trick*
>
> I used a hidden technique in this conversation: asking "What would it take
> to accomplish an objective?" This technique comes from the
> Hugo-Award-winning SciFi novel Stand on Zanzibar
> <https://en.wikipedia.org/wiki/Stand_on_Zanzibar>.
>
> I read this novel decades ago. For me, the pivotal moment comes when the
> protagonist "unfreezes" Shalmaneser, an almost all-powerful supercomputer.
> As I vaguely remember, the computer starts rejecting inputs about Beninia,
> a (fictional?) region in Africa.
> The solution starts with this dialog:
>
> QQQ
> Evaluate this, then: Postulate that the data given you about Beninia are
> true.
> Cue: what would be necessary to reconcile them with everything else you
> know?
> QQQ
>
> After lengthy computation, Shalmaneser replies that it needs to accept the
> possibility of an unknown factor influencing the Beninians' actions.
>
> The protagonist then instructs the computer to accept this unknown factor
> as fact. "I tell you three times" :-)
>
> *Summary*
>
> You will see this (tactical? strategic?) trick in various places in my
> dialog. The most important use of this trick is this question:
>
> What would it take to convince Pushmeet Kohli to use Gemini to improve
> public policy?
>
> I then followed up the answer with additional questions, the first being:
>
> "So, would combating misinformation have (in your words) 'Clear and
> Measurable Impact on Complex Societal Challenges'?"
>
> I like this approach. It lets the AI do the arguing for me. What do you
> think?
>
> Edward
>
> P.S. I asked Gemini to "polish" this letter. I despise the results. What
> you are hearing in this email is *my* voice, not some way-too-suave
> imitation.
>
> EKR

--
You received this message because you are subscribed to the Google Groups "leo-editor" group.
To unsubscribe from this group and stop receiving emails from it, send an email to leo-editor+unsubscr...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/leo-editor/285754b1-f73c-496e-b8e5-0657785f9132n%40googlegroups.com.
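For anyone who wants to reuse the "skeptical expert" prompt quoted earlier in this thread programmatically, here is a minimal sketch that packages it as a system message in the OpenAI-style chat format. The helper name `skeptical_messages` and the choice of message format are my own illustration, not anything from the original conversation:

```python
# Hypothetical helper: prepends the "skeptical expert" prompt (quoted
# above in this thread) as a system message in the OpenAI-style chat
# format. The function name and format choice are illustrative assumptions.

SKEPTICAL_EXPERT_PROMPT = (
    "Answer this as a skeptical expert would, minimizing speculation and "
    "avoiding any attempt to affirm or reassure me. If the answer requires "
    "assumptions, state them explicitly. If you don't know or can't verify "
    "something, say so clearly. I want a strictly factual and critical "
    "analysis, not an optimistic or polished summary."
)


def skeptical_messages(question: str) -> list[dict]:
    """Build a chat-completion message list that puts the
    skeptical-expert prompt in the system role and the caller's
    question in the user role."""
    return [
        {"role": "system", "content": SKEPTICAL_EXPERT_PROMPT},
        {"role": "user", "content": question},
    ]


# Example: the resulting list can be passed as the `messages` argument
# to a chat-completion API call.
msgs = skeptical_messages(
    "Does asking for concise answers increase hallucination rates?"
)
```

Keeping the prompt in the system role rather than pasting it into every user message means it applies to the whole conversation, which is closer to how the Gem-style custom instructions mentioned above behave.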