What Adam is Reading - Week of 2-24-25

Week of February 24, 2025

 

Thanks to the décor of the musical The Bedwetter (adapted from Sarah Silverman's memoir about her childhood in 1980), I spent the weekend reflecting on my late-1970s childhood home, filled with macrame wall art, earth tones, plastic ashtrays, and indoor ferns.  Despite these items' (now) objectively unappealing nature, they evoke a surprising feeling of comfort.  It is a good reminder of rosy retrospection, a cognitive bias that, coupled with the fallacy of declinism, amplifies the notion that things ought to be 'made great again.'  Either way, the play was fantastic (even though the writers missed the opportunity to rhyme the words enuresis and thesis), and it reminded me to pull out the pillow I saved from my parents' 1977 couch (see below).

 

The play:

https://www.npr.org/2025/02/21/nx-s1-5297877/sarah-silvermans-the-bedwetter-tells-a-very-personal-story-with-wide-relevance-now

The 1977 Weinstein family couch pillow:

https://drive.google.com/file/d/1w7hxxQlsCLnA4i3SNzd6pSj2WL6wLurr/view

and

https://en.wikipedia.org/wiki/Rosy_retrospection

and

https://en.wikipedia.org/wiki/Declinism

---

Listen to a Google NotebookLM AI-generated podcast of the newsletter with two virtual "hosts."

 

https://drive.google.com/file/d/19F712TGBW-VZg45P_zFkEZ4msh67Imys/view

 

About NotebookLM: https://blog.google/technology/ai/notebooklm-audio-overviews/

------

 

Science and Technology Trends

 

I have previously said this newsletter's science and AI sections would eventually merge.  This week they do.

 

A reasoning AI tool from Google generated a range of novel hypotheses that matched the work of scientists (studying bacterial antibiotic resistance) who had not previously published or shared their data in any public forum.  The scientists' hypothesis was that antibiotic-resistant bacteria can acquire tails from different viruses, allowing resistance to spread between bacterial species (essentially, bacteria weaponizing viruses).  The scientists stated their hypothesis was unique - they had not published their ideas anywhere.  Yet the Google AI tool's top answer suggested superbugs may exchange viral tails precisely as the researchers hypothesized.  Moreover, the AI tool offered a range of other ideas, many of them plausible.  In essence, AI provides thought partnership and creative exploration.

https://www.bbc.com/news/articles/clyz6e9edy3o

Here is Google's AI co-scientist, a multi-agent system.  It is worth looking at the diagrams.

https://research.google/blog/accelerating-scientific-breakthroughs-with-an-ai-co-scientist/

 

Duke researchers published a cross-sectional study using randomized assignment to compare patient comprehension of translated vs. untranslated hospital discharge summary notes (DSNs).  "Utilizing GPT-4 for plain language translations of hospital discharge summaries significantly improved comprehension outcomes across all diagnoses and patient populations, with even greater benefits observed in historically marginalized populations.  Although further research is needed to improve the clinical reliability and robustness of GPT-4, this study provides strong, objective evidence that GPT-4 translation improves patients' understanding of content reported in clinical notes [and is easily layered into modern health IT systems]."

https://ai.nejm.org/doi/full/10.1056/AIoa2400402

and TLDR Claude summary:

https://claude.site/artifacts/63962de8-c51f-4cfb-93ca-d5ea1ac14875

 

 

Anti-Anti-Science Articles of Note

 

Beyond the logical fallacies and agenda-motivated misinterpretation of scientific data, there is a more subtle form of anti-science: publishing false data.  Stated more technically (and without the connotation of intent), a small percentage of published articles contain non-reproducible results (for reasons ranging from incompetence to nefariousness).  This is challenging to detect since most journal readers cannot or will not redo the studies.

 

Canadian professor Ulrich Schimmack created a set of statistical tools that examine patterns in reported data to predict the likelihood of any given study being reproducible (even these methods are biased by the fact that published studies are typically those with positive findings - data from negative studies, with no or unfavorable findings, are often not shared in journals).  While Schimmack uses these tools to evaluate psychology studies, his logic is generalizable.

https://replicationindex.com/about/
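The core logic of one of his measures, the R-Index, can be sketched in a few lines: convert reported p-values to z-scores, estimate each study's "observed power" (the chance a replication would again reach significance), and penalize any gap between the reported success rate and that power - a sign of selection for significance.  A minimal Python sketch of this idea (a simplified illustration I wrote, not Schimmack's actual tool):

```python
# Simplified, hypothetical sketch of an R-Index-style replicability estimate.
# Schimmack's actual tools (R-Index, z-curve) are more sophisticated.
from statistics import NormalDist, median

def r_index(p_values, alpha=0.05):
    """Median observed power, penalized by how much the reported
    success rate exceeds it (evidence of selection for significance)."""
    nd = NormalDist()
    crit = nd.inv_cdf(1 - alpha / 2)                  # ~1.96 for alpha = .05
    z = [nd.inv_cdf(1 - p / 2) for p in p_values]     # two-sided p -> z-score
    obs_power = [1 - nd.cdf(crit - zi) for zi in z]   # est. chance of replicating p < alpha
    success_rate = sum(p < alpha for p in p_values) / len(p_values)
    inflation = success_rate - median(obs_power)
    return median(obs_power) - inflation

# A literature of very strong results scores high; a literature of
# just-barely-significant results scores near zero.
print(round(r_index([0.001] * 5), 2))
print(round(r_index([0.049] * 5), 2))
```

The intuition: if every published study is "significant" but the underlying tests were barely powered to detect anything, someone was almost certainly filing the failures in a drawer.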

 

Related: Last week, Nature published a study of the universities associated with the most retracted studies.  Retracted articles are published only to be later disclaimed and flagged or withdrawn by the journal due to identified issues or concerns.  Sometimes the data are wholly falsified (for example, a young scientist publishes a purchased, made-up article for career advancement).  In other cases, researchers complete the work but alter the data for more compelling results.  Either way, this study identifies the universities whose faculty and staff have the highest retraction rates across numerous journals.

X post: https://x.com/nature/status/1892164778019156443

and

Summary of Article:

https://claude.site/artifacts/cab288e4-75e8-4ef0-ac0a-9cceea57763c

and

Article:

https://www.smry.ai/proxy?url=https%3A%2F%2Fwww.nature.com%2Farticles%2Fd41586-025-00455-y%3FlinkId%3D13024205

 

So, how does a typical reader deal with false data they may not realize is false?  Some thoughts:

  • No single study is typically compelling enough to justify starting or stopping a therapy.
  • Paraphrasing Carl Sagan, extraordinary claims require extraordinary data and scientific rigor.
  • Meta-analyses (or reviews of multiple studies) help identify trends across numerous studies and over time.
  • Ultimately, most of "what we know" about science is founded on concepts of "best available evidence," not absolute "Truth" (with a capital T).
  • Most humans do not have the time, capacity, or inclination to read volumes of studies.  Thus, it is easy for those with agendas to sow doubt with provocative questions, quoting contrary data (of varying quality) and exploiting the gaps in scientific "certainty."
  • Combining doubt in scientists (often through ad hominem attacks) with people's natural tendencies toward loss aversion and present bias is the modus operandi of agenda-driven anti-science efforts.  (I think Joe Rogan and his guests offer master classes in this: Rogan asks the question, injecting doubt into the certainty/uncertainty gap; the guests provide contrary data and ad hominem attacks; and the audience has its loss aversion and present bias amplified or reinforced.)

 

Related: this is what a reasonable data summary from multiple trials looks like for kidney doctors - a post about the various data supporting the use of SGLT-2i drugs in patients with chronic kidney disease.  In this instance, the data strongly support their use, even though side effects are possible.

https://x.com/akronbichler/status/1893237029258862915?s=42

 

 

Living with AI.

 

I joke about my AI-generated emails answering others' AI-generated emails, but we may soon have the option of sending AI agents to meetings on our behalf.  I found this article: Meeting Delegates: Benchmarking LLMs on Attending Meetings on Our Behalf.

"In this work we aim to develop a prototype of an LLM-powered meeting delegate system to address the above challenges, focusing initially on the first two while leaving the last two in the future work.  To assess its effectiveness across various LLMs, we conduct real-world testing in a few demo scenarios and construct an evaluation dataset from real meeting transcripts.  In contrast to recent studies that emphasize the facilitator role in meeting engagement Mao et al. (2024), our work concentrates on the participant role, which is more prevalent and distinct from that of the facilitator."

Paper: https://arxiv.org/html/2502.04376v1

Commentary: https://x.com/emollick/status/1891527817826828565

Here is the analog version of this idea that lives in my head, from the 1985 movie Real Genius: https://youtu.be/wB1X4o-MV6o?si=2BVDYhExWXkEUVt

 

Our older son is a student representative on his college's academic curriculum committee.  Recently, he and I discussed a hot topic in his meetings - the impact of AI on teaching, learning, and higher education.  Moreover, his school has a long tradition of an honor-code culture (many tests are take-home, and student integrity is a highly valued cultural norm).  In helping him think through the issues, I came across this excellent series of articles from Tufts University on the challenges of college teaching in the age of AI.   It is a relatively quick read and offers helpful thoughts on gauging learning when the entirety of human knowledge is accessible with an LLM.

Here is a ChatGPT summary of the series:

The "Addressing Academic Integrity in the Age of AI" series from Teaching@Tufts explores how educators can adapt teaching and assessment methods in response to the rise of advanced generative AI tools.  The series emphasizes the need to move beyond traditional detection methods and rethink assessment designs to maintain academic integrity.

Key Articles in the Series:

  • Part 1: Beyond AI Detection – Rethinking Academic Assessments.  This article discusses the limitations of relying solely on AI detection tools, which can be inaccurate and biased.  It advocates for designing assessments that emphasize the learning process, critical thinking, and creativity, making it harder to pass off AI output as authentic student work.
  • Part 2: The AI Marble Layer Cake – Reconsidering In-Class and Out-of-Class Learning & Assessment.  This piece introduces the concept of blending in-class and out-of-class activities to create a cohesive learning experience.  By integrating AI tools into both settings, educators can help students understand the appropriate use of AI while ensuring assessments accurately reflect individual understanding.
  • Part 3: Conversations about Cheating – Revisiting AI & Academic Integrity.  This article emphasizes the importance of open dialogues with students about the ethical use of AI.  Clear communication of expectations and collaborative discussions on academic integrity can foster a culture of honesty and responsibility.
  • Part 4: Serving the AI Layer Cake in Your Classroom?  Educational Technology Can Help.  This final installment explores how educational technologies can assist in integrating AI into the classroom.  It highlights tools and strategies that support the responsible use of AI, enhancing learning outcomes while maintaining academic standards.

Throughout the series, the overarching theme is the necessity for educators to adapt to the evolving technological landscape by redesigning assessments, fostering open communication, and leveraging educational technologies to uphold academic integrity in the age of AI.

https://sites.tufts.edu/teaching/2024/07/11/addressing-academic-integrity-in-the-age-of-ai/

 

Here is an unsettling overview of the various humanoid robots in development.  [Insert dystopian techno-horror vision of the future here.]

https://x.com/AiContentRebel/status/1893675488599486755

Related:

https://open.spotify.com/episode/3LNt85T7afcvV7XzZyQuME

 

 

Infographics

A loyal reader shared this fantastic visualization from Information is Beautiful - a meta-analysis of meta-analyses - graphing various dietary supplements by their popularity and the quality of their supporting data.  While it is not perfect (there is a lot of nuance hiding under popularity and relative data quality), the visual is interactive and a great starting point to explore the various supplements, the conditions they "treat," and their proposed efficacy.

https://informationisbeautiful.net/visualizations/snake-oil-scientific-evidence-for-nutritional-supplements-vizsweet/

 

 

Things I learned this week

 

I often wonder if reincarnating as someone's dog might be a great way to experience the world.  Revisiting life as a cuttlefish may also have advantages - they are adaptable and demonstrate a significant capacity for thought.

https://esajournals.onlinelibrary.wiley.com/doi/10.1002/ecy.70021

and

summary from Claude:

https://claude.site/artifacts/1cca8bb7-a0e9-462f-b2dd-bb875a258470

 

A loyal reader shared this excellent article about the history of cocaine-infused wine, a precursor to the original Coca-Cola and (unsurprisingly) very popular amongst many famous people of the late 19th century.

https://www.thetakeout.com/1784867/vin-mariani-coca-wine/

 

 

AI art of the week.

(A visual mashup of this week's topics - now using ChatGPT to summarize the newsletter, suggest prompts, and make the images.)

 

"A surreal late-1970s living room with macrame wall art, indoor ferns, earth-tone furniture, and a retro couch with a patterned pillow.  However, instead of humans, humanoid robots are lounging in the room—one reading a vintage newspaper, another adjusting a record player, and a third sipping coffee from a plastic ashtray.  The warm nostalgic glow contrasts with the unsettling presence of futuristic androids, blending past and future in an uncanny yet cozy scene."

 

DALL-E:

https://drive.google.com/file/d/1noYTwJ0KN9EqN0J_j8NlJmts4LuLHyF5/view

 

I tried Grok this week, too:

https://drive.google.com/file/d/1ek1XHIxW2QhgwTmnOMOhN60Agz0ZWgTE/view

 

---

COVID rates are steady at about 1 in 72 individuals in the U.S.   RSV, Norovirus, and Influenza wastewater data indicate falling rates (though still high).

The Pandemic Mitigation Collaborative (PMC) website uses wastewater levels to forecast 4-week predictions of COVID rates.

https://pmc19.com/data/

based upon https://biobot.io/data/

 

Wastewater Scan offers a multi-organism wastewater dashboard with an excellent visual display of individual treatment plant-level data.

https://data.wastewaterscan.org/

----

 

 

Clean hands and sharp minds,

 

Adam
