Send Link mailing list submissions to
[email protected]
To subscribe or unsubscribe via the World Wide Web, visit
https://mailman.anu.edu.au/mailman/listinfo/link
or, via email, send a message with subject or body 'help' to
[email protected]
You can reach the person managing the list at
[email protected]
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Link digest..."
Today's Topics:
1. Model Council - Perplexity AI's new service (Antony Barry)
2. Elon Musk is getting serious about orbital data centers
(Antony Barry)
3. Open-source AI tool beats giant LLMs in literature reviews -
and gets citations right (Antony Barry)
4. AI is "speed skating on ice." (Stephen Loosley)
----------------------------------------------------------------------
Message: 1
Date: Tue, 17 Feb 2026 15:40:02 +1100
From: Antony Barry <[email protected]>
To: Link list <[email protected]>
Subject: [LINK] Model Council - Perplexity AI's new service
Message-ID: <[email protected]>
Content-Type: text/plain; charset=utf-8
(I got Perplexity to write this)
Perplexity's **Model Council** is a multi-model "routing and consensus" layer
that sends your query to three frontier AI models in parallel, then has a
fourth synthesizer model compare and merge their answers into a single,
higher-confidence response. For networking and research workflows, it
effectively automates the old "open three tabs and cross-check" pattern: you
see where models agree or diverge, can inspect each response side-by-side in a
structured table, and get a consolidated answer that surfaces blind spots and
disagreements instead of hiding them.[1][2][3][4][5]
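A minimal sketch of that fan-out-and-synthesize pattern; the model names, the
call_model helper, and the synthesis prompt below are illustrative stand-ins,
not Perplexity's actual API:

    from concurrent.futures import ThreadPoolExecutor

    COUNCIL = ["model-a", "model-b", "model-c"]  # three frontier models (names illustrative)
    SYNTHESIZER = "model-d"                      # fourth model that merges the answers

    def call_model(model: str, prompt: str) -> str:
        # Placeholder for a real chat-completion call to `model`.
        return f"[{model}] answer to: {prompt}"

    def model_council(query: str) -> str:
        # 1. Fan the query out to the council members in parallel.
        with ThreadPoolExecutor(max_workers=len(COUNCIL)) as pool:
            answers = list(pool.map(lambda m: call_model(m, query), COUNCIL))
        # 2. Ask a synthesizer model to compare the answers, flag agreement and
        #    disagreement, and merge them into one consolidated response.
        synthesis_prompt = (
            f"Question: {query}\n\n"
            + "\n\n".join(f"Answer {i + 1}:\n{a}" for i, a in enumerate(answers))
            + "\n\nCompare these answers, note where they agree or diverge, "
              "and write one consolidated answer."
        )
        return call_model(SYNTHESIZER, synthesis_prompt)

    print(model_council("What is BGP route flap damping?"))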
Sources
[1] What is Model Council? | Perplexity Help Center
https://www.perplexity.ai/help-center/en/articles/13641704-what-is-model-council
[2] Perplexity Model Council: Compare AI Answers Side-by-Side
https://www.gend.co/blog/perplexity-model-council
[3] Perplexity launches Model Council to compare answers ...
https://www.storyboard18.com/digital/perplexity-launches-model-council-to-compare-answers-across-multiple-ai-models-89246.htm
[4] Perplexity Model Council: Compare AI Answers Side-by-Side
https://www.gend.co/en-ca/blog/perplexity-model-council
[5] Model Council Boosts Accuracy with Multi-Model Comparison
https://www.linkedin.com/posts/perplexity-ai_introducing-model-council-in-perplexity-activity-7425210595890769921-qRdh/?_bhlid=e4d9018af7b06619ef85a7b64220cd7bc6d7e316
[6] Perplexity Introduces Model Council, a new research ...
https://www.linkedin.com/posts/syedmuneem-hussainy-9535b425_ai-perplexity-multimodelai-activity-7426108725188501504-XRS7
[7] What is Perplexity's Model Council and How to Use It?
https://itmatterss.in/industry/ai/perplexity-model-council-multi-model-ai-answers/
[8] Perplexity Model Council Creates A New Standard For ...
https://www.reddit.com/r/AISEOInsider/comments/1r1ah00/perplexity_model_council_creates_a_new_standard/
[9] Perplexity's New Model Council Feature Is Actually Pretty ...
https://www.reddit.com/r/aicuriosity/comments/1qwq1mw/perplexitys_new_model_council_feature_is_actually/
[10] Perplexity AI Model Council: The New Feature That Reveals How AI ...
https://www.reddit.com/r/AISEOInsider/comments/1r6286b/perplexity_ai_model_council_the_new_feature_that/
[11] Perplexity AI
https://www.facebook.com/perplexityofficial/posts/model-council-on-perplexity-helps-you-receive-more-accurate-reliable-answers-by-/1073940245793136/
[12] Model Council Boosts Accuracy with Multi- ...
https://www.linkedin.com/posts/perplexity-ai_introducing-model-council-in-perplexity-activity-7425210595890769921-qRdh
[13] NEW Perplexity Model Council is INSANE!
https://www.youtube.com/watch?v=UJEUTULRmCQ
[14] Ashwin Krishnan's Post
https://www.linkedin.com/posts/ashwinknan_noticed-an-email-from-perplexity-ai-this-activity-7426848652989497344-uvqw
[15] Introducing Model Council
https://www.facebook.com/61558213424986/posts/introducing-model-councilperplexity-has-launched-model-council-a-new-research-fe/122218773212273780/
Antony Barry
[email protected]
------------------------------
Message: 2
Date: Tue, 17 Feb 2026 14:23:37 +1100
From: Antony Barry <[email protected]>
To: Link list <[email protected]>
Subject: [LINK] Elon Musk is getting serious about orbital data
centers
Message-ID: <[email protected]>
Content-Type: text/plain; charset=utf-8
https://techcrunch.com/2026/02/05/elon-musk-is-getting-serious-about-orbital-data-centers/?lctg=1980929&utm_source=digitaltrends&utm_medium=email&utm_content=subscriber_id:1980929&utm_campaign=DTDaily20260206?
Elon Musk is getting serious about orbital data centers
techcrunch.com
Concise summary
Elon Musk has filed plans with the Federal Communications Commission (FCC) for
a million-satellite data center network, indicating a serious effort to
establish orbital data centers.
Musk argues that solar panels produce more power in space, making it cheaper to
operate data centers in orbit, and predicts that 2028 will be a tipping point
year for orbital data centers.
Musk forecasts that in five years, more AI will be launched and operated in
space than the cumulative total on Earth, with SpaceX and its newly merged AI
company, xAI, poised to benefit from this shift.
Antony Barry
[email protected]
------------------------------
Message: 3
Date: Tue, 17 Feb 2026 15:27:31 +1100
From: Antony Barry <[email protected]>
To: Link list <[email protected]>
Subject: [LINK] Open-source AI tool beats giant LLMs in literature
reviews - and gets citations right
Message-ID: <[email protected]>
Content-Type: text/plain; charset=utf-8
Summary
## Introduction to OpenScholar and Its Design
- OpenScholar is a retrieval-augmented language model designed for scientific
research tasks, addressing challenges such as hallucinations, outdated
pre-training data, and limited attribution in traditional language models.
- The model integrates a domain-specialized data store (OpenScholar DataStore)
and a self-feedback-guided generation mechanism to improve factuality,
coverage, and citation accuracy.
- OpenScholar outperforms other models, including proprietary systems like
GPT-4o and PaperQA2, in evaluations on the ScholarQABench benchmark,
demonstrating its ability to produce high-quality, comprehensive, and
transparent scientific literature synthesis.
## Challenges with Traditional LLMs and Retrieval-Augmented Solutions
- The study found that large language models (LLMs) often hallucinate
information, and that this effect is amplified in scientific areas that are
undercovered on the open web.
- The analysis showed that retrieval-augmented models, such as OpenScholar,
outperform non-retrieval models in terms of coverage and citation accuracy, and
that human-written answers remain strong baselines for quality and relevance.
## Expert Evaluations and Model Performance
- Expert evaluations found that OpenScholar-GPT-4o and OpenScholar-8B models
can produce more comprehensive responses than humans, with higher coverage and
citation accuracy, and are rated as useful in 80% and 72% of queries,
respectively.
- OpenScholar is a retrieval-augmented language model that demonstrates strong
performance in scientific literature synthesis, outperforming existing systems
and human-generated answers in certain evaluations.
- The model's success is attributed to its ability to provide more
comprehensive and detailed answers, with coverage being a key factor in human
assessments of response quality.
## Technical Contributions and Limitations
- Despite its strengths, OpenScholar has limitations, including inconsistent
retrieval of relevant papers, potential factual inaccuracies, and reliance on
proprietary models, highlighting areas for future research and improvement.
- OpenScholar introduces several technical contributions, including the
construction of the OpenScholar DataStore (OSDS), a database of 45 million
scientific papers with precomputed dense embeddings.
## Retrieval Pipeline Architecture
- The OpenScholar retrieval pipeline integrates a trained retriever and
reranker to select the top N passages for the generator, ensuring broader
coverage and improved relevance.
- The inference pipeline uses iterative self-feedback inference with retrieval
and citation verification to improve factuality and evidence grounding, and
generates high-quality training data for specialized language models (a rough
sketch of this loop follows below).
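A minimal sketch of that retrieve, rerank, generate, and verify loop; every
function here is a crude stand-in (keyword overlap instead of dense embeddings,
a canned string instead of an LM call), not the OpenScholar code:

    from dataclasses import dataclass

    @dataclass
    class Passage:
        paper_id: str
        text: str
        score: float = 0.0

    def retrieve(query, datastore, k=100):
        # Stand-in for dense retrieval over the datastore: score by keyword overlap.
        scored = [Passage(p.paper_id, p.text,
                          sum(w in p.text.lower() for w in query.lower().split()))
                  for p in datastore]
        return sorted(scored, key=lambda p: p.score, reverse=True)[:k]

    def rerank(query, passages, n=8):
        # Stand-in for a cross-encoder reranker: keep the top-N passages.
        return passages[:n]

    def generate(query, passages, feedback=None):
        # Stand-in for the generator LM; cites passages by [paper_id].
        cites = " ".join(f"[{p.paper_id}]" for p in passages[:3])
        note = f" (revised after feedback: {feedback})" if feedback else ""
        return f"Draft answer to '{query}' {cites}{note}"

    def verify_citations(answer, passages):
        # Check that every cited paper_id appears in the retrieved set.
        cited = {tok.strip("[]()'.,") for tok in answer.split() if tok.startswith("[")}
        return cited <= {p.paper_id for p in passages}

    def answer_with_self_feedback(query, datastore, max_rounds=2):
        passages = rerank(query, retrieve(query, datastore))
        answer = generate(query, passages)
        for _ in range(max_rounds):
            if verify_citations(answer, passages):
                break
            # Self-feedback round: retrieve again and regenerate.
            passages = rerank(query, retrieve(query, datastore))
            answer = generate(query, passages, feedback="fix unsupported citations")
        return answer

    docs = [Passage("paper-1", "Retrieval-augmented generation reduces hallucination."),
            Passage("paper-2", "Dense retrievers index millions of scientific abstracts.")]
    print(answer_with_self_feedback("How does retrieval reduce hallucination?", docs))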
## Synthetic Data Generation and Benchmark Creation
- The authors generate synthetic data by prompting a language model (LM) to
create literature review questions based on 10,000 paper abstracts published
after 2017.
- They introduce a two-step data filtering process, pairwise filtering and
rubric filtering, to address issues such as hallucinations and repetitive
writing in the synthetic data (a loose sketch of this two-stage filter appears
after this list).
- The authors create a benchmark called ScholarQABench to evaluate the
capabilities of LMs in automating scientific literature review, which includes
tasks with diverse input-output formats and spans four scientific disciplines.
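A loose sketch of that two-stage filter over synthetic question-answer pairs;
the question prompt, the pairwise judge, and the rubric checks below are
placeholders, since the summary above names the stages but not their exact
rules:

    def generate_question(abstract: str) -> str:
        # Placeholder for prompting an LM to turn an abstract into a review question.
        return f"What does the recent literature say about: {abstract[:60]}...?"

    def pairwise_better(answer_a: str, answer_b: str) -> str:
        # Placeholder pairwise filter: a real LM judge would pick the better
        # answer; here we simply keep the longer draft.
        return answer_a if len(answer_a) >= len(answer_b) else answer_b

    def passes_rubric(answer: str) -> bool:
        # Placeholder rubric filter: a real LM judge would score hallucination
        # and repetitiveness; here we reject very short or very repetitive text.
        words = answer.split()
        return len(words) > 20 and len(set(words)) / len(words) > 0.5

    def build_synthetic_pairs(abstracts, draft_answer_pairs):
        data = []
        for abstract, (ans_a, ans_b) in zip(abstracts, draft_answer_pairs):
            question = generate_question(abstract)
            best = pairwise_better(ans_a, ans_b)   # stage 1: pairwise filtering
            if passes_rubric(best):                # stage 2: rubric filtering
                data.append((question, best))
        return data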
## Data Collection and Evaluation Metrics
- The authors collected 2,759 expert-written literature review questions in
biomedicine and neuroscience, and 108 questions with expert-written answers in
computer science, biomedicine, and physics.
- They developed a multifaceted automatic evaluation pipeline, including
metrics such as correctness, citation accuracy, and content quality, to assess
the performance of language models (a toy version of the citation-accuracy
check is sketched below).
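A toy version of a citation-accuracy check in the spirit of citation precision,
recall, and F1; the keyword-overlap "support" test below is only a stand-in for
whatever stricter support check the benchmark actually applies:

    def supports(statement: str, passage_text: str) -> bool:
        # Crude support test: do the statement's longer keywords appear in the passage?
        words = {w.lower().strip(".,") for w in statement.split() if len(w) > 4}
        return any(w in passage_text.lower() for w in words)

    def citation_scores(claims, passages):
        # claims: list of (statement, [cited paper ids]); passages: dict id -> text.
        # Precision: fraction of individual citations that support their statement.
        # Recall: fraction of statements supported by at least one of their citations.
        cited = supported_citations = supported_statements = 0
        for statement, cited_ids in claims:
            hits = [supports(statement, passages.get(pid, "")) for pid in cited_ids]
            cited += len(hits)
            supported_citations += sum(hits)
            supported_statements += any(hits)
        precision = supported_citations / cited if cited else 0.0
        recall = supported_statements / len(claims) if claims else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1

    # One well-supported citation and one citation that does not support its claim.
    passages = {"paper-1": "Retrieval-augmented generation improves citation accuracy.",
                "paper-2": "Dense retrieval scales to millions of scientific papers."}
    claims = [("Retrieval-augmented generation improves citation accuracy.", ["paper-1"]),
              ("Orbital data centres reduce cooling costs.", ["paper-2"])]
    print(citation_scores(claims, passages))   # -> (0.5, 0.5, 0.5)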
## Related Research and Models
- The authors introduced ScholarQABench, a comprehensive benchmark with
automated metrics, and OpenScholar, a model that outperforms previous systems
and shows superiority over human experts in five domains.
- The section references various research papers and studies on language
models, retrieval-augmented generation, and scientific literature review.
- Researchers such as Lewis, Guu, Asai, and others have made significant
contributions to the development of retrieval-augmented language models and
their applications.
- The studies cover topics like hallucination detection, knowledge-intensive
NLP tasks, and the evaluation of language models, with datasets like S2ORC,
SciRIFF, and PubMedQA being utilized.
- The document references various studies on large language models, including
SciFive, BioBART, BioGPT, and others, which are used for biomedical literature
and text generation.
## Institutional Contributions and Model Variants
- Researchers have developed models such as OpenScholar, OpenScholar-8B, and
OpenScholar-GPT-4o, which utilize retrieval-augmented language models for
scientific discovery and hypothesis generation.
- The studies involve authors from multiple institutions, including the
University of Washington, Allen Institute for AI, and Stanford University, with
contributions from Akari Asai, Jacqueline He, Rulin Shao, and others.
## Evaluation Pipeline and Experimental Results
- The ScholarQABench evaluation pipeline assesses aspects such as correctness
and citation accuracy in retrieval-augmented language models.
- Experiments with OpenScholar and standard RAG models using Llama 3.1 8B and a
trained 8B model show the effect of context length on correctness and citation
F1 scores.
- OpenScholar-GPT-4o and OpenScholar-8B models are preferred over expert
answers due to higher coverage and depth, outperforming GPT-4o without
retrieval.
https://www.nature.com/articles/s41586-025-10072-4?utm_source=Live+Audience&utm_campaign=40a8f088de-nature-briefing-ai-robotics-20260210&utm_medium=email&utm_term=0_-b08e196e33-50902052?
Synthesizing scientific literature with retrieval-augmented language models
nature.com
Antony Barry
[email protected]
------------------------------
Message: 4
Date: Tue, 17 Feb 2026 22:37:52 +1030
From: Stephen Loosley <[email protected]>
To: "link" <[email protected]>
Subject: [LINK] AI is "speed skating on ice."
Message-ID: <[email protected]>
Content-Type: text/plain; charset="UTF-8"
Not sure, this item may have been linked before ..
Trillion-dollar AI market wipeout happened because investors banked that
"almost every tech company would come out a winner"
(Embedded media: AI Disruption Fears Rattle Industries)
By Eleanor Pringle, Mon, February 16, 2026 at 10:55 PM GMT+11
https://finance.yahoo.com/news/trillion-dollar-ai-market-wipeout-115521847.html
Investors wobbled last week as they worked through the disruption AI is likely
to cause across global industries, with further hiccups potentially bubbling
through this week.
But the reckoning should have been expected, argued Deutsche Bank in a note to
clients this morning, because it is a readjustment of perhaps overly optimistic
expectations.
Software stocks in particular suffered a wipeout amid mounting concerns that
large language models may replace current service offerings. Companies in the
legal, IT, consulting and logistics sectors were also impacted.
JP Morgan wrote last week that some $2 trillion had been wiped off software
market caps alone as a result, a reality that, prior to a fortnight ago,
Deutsche's Jim Reid argued had been purely academic.
A 13-figure sell-off is something Reid has speculated over for some time,
telling clients: "For months, my published view has been that nobody truly
knows who the long-term winners and losers of this extraordinary technology
will be.
Yet as recently as October, markets were implicitly pricing in a world where
almost every tech company would come out a winner.
"Over recent weeks we've seen a more realistic differentiation emerge within
tech - but that repricing is now rippling into the broader economy with
surprising speed."
Reid hasn't been alone in his suspicion that investors had perhaps been
painting over the entire stock market (and indeed wider economy) with the same
optimistic brush.
Some speculators have made broad-stroke arguments that the efficiencies offered
by AI will result in wins for the vast majority of companies, while others have
argued that, although AI as a whole is not in a bubble, there are pockets of
overoptimism that may burst.
JPMorgan's CEO Jamie Dimon is of such an opinion, explaining at the Fortune
Most Powerful Women Summit last year: "You should be using it," (speaking to
any business that was listening). But he added a caveat, saying that back in
1996, "the internet was real," and "you could look at the whole thing like it
was a bubble."
Then he broke down the real difference that he sees between AI, on the one
hand, and generative AI, on the other. It's an important distinction, Dimon
said, while adding that "some asset prices are high, in some form of bubble
territory."
Indeed, Jeremy Siegel, Emeritus Professor of Finance at The Wharton School of
the University of Pennsylvania, argued that such shifts demonstrate investors
are "asking the right questions."
Writing for WisdomTree a week ago, where he serves as senior economist, Siegel
said: "When companies talk about $200 billion in capital expenditures, markets
should scrutinize payback periods, competitive dynamics, and whether durable
moats can be built in an environment where technology is evolving at breakneck
speed. That tension explains why leadership will continue to rotate even as the
secular story remains intact."
That said, Reid suggested that the market may be repricing overzealously,
arguing the disruption in "old economy" sectors feels overdone: "The real
challenge is that even by the end of this year, we still won't have enough
evidence to identify the structural winners and losers with confidence.
That leaves plenty of room for investors' imaginations - both optimistic and
pessimistic - to run wild. As such, big sentiment swings will continue to be
the order of the day."
Thin ice
Disruption provoked by investors' caution around AI sits at odds with other
market adjustments, argues Ed Yardeni, because it is a cycle that feeds on
itself.
Yardeni, the president of the well-regarded economic research shop that bears
his name, wrote over the weekend that AI is "speed skating on ice."
While it is typical for technological revolutions to be disruptive, the top
economist argued, AI has the potential to unseat its own creators.
He argued AI has the "ability to write software code, including AI code. So it
can feed on itself, with the new code eating the old, making it obsolete very
quickly.
The pace of obsolescence seems to be moving at warp speed for both AI hardware
and software, particularly the LLMs. That pace has recently spooked investors
who've been selling the stocks of any company that might be negatively
disrupted by AI."
This story was originally featured on Fortune.com
------------------------------
Subject: Digest Footer
_______________________________________________
Link mailing list
[email protected]
https://mailman.anu.edu.au/mailman/listinfo/link
------------------------------
End of Link Digest, Vol 399, Issue 15
*************************************