ibl.ai

ibl.ai is a generative AI education platform based in NYC. This podcast, curated by its CTO, Miguel Amigot, focuses on high-impact trends and reports about AI.

Listen on:

  • Apple Podcasts
  • YouTube
  • Podbean App
  • Spotify
  • Amazon Music
  • iHeartRadio

Episodes

Friday Nov 01, 2024

Summary of https://arxiv.org/pdf/2410.05229
This research paper investigates the mathematical reasoning capabilities of large language models (LLMs) and finds that their performance is not as robust as previously thought.
The authors introduce a new benchmark called GSM-Symbolic, which generates variations of math problems to assess the models' ability to generalize and handle changes in question structure.
The results show that LLMs struggle to perform true logical reasoning, often exhibiting a high degree of sensitivity to minor changes in input.
The authors also find that LLMs often blindly follow irrelevant information in the questions, suggesting that their reasoning process is more like pattern matching than true conceptual understanding.
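The variation idea behind a benchmark like GSM-Symbolic can be sketched as a word-problem template whose names and numbers are resampled, so a model is tested on many surface variants of the same underlying problem. This is a minimal illustration only; the template and value ranges below are invented, not taken from the benchmark itself:

```python
import random

# One GSM-style word problem with its proper nouns and numeric values
# left as placeholders, so many surface variants can be generated from
# a single template while the ground-truth answer tracks the sampled values.
TEMPLATE = ("{name} has {x} apples and buys {y} more. "
            "How many apples does {name} have now?")

def generate_variant(rng: random.Random) -> tuple[str, int]:
    name = rng.choice(["Ava", "Liam", "Noah", "Mia"])
    x, y = rng.randint(2, 20), rng.randint(2, 20)
    question = TEMPLATE.format(name=name, x=x, y=y)
    return question, x + y  # answer is recomputed from the sampled values

rng = random.Random(0)
for _ in range(3):
    question, answer = generate_variant(rng)
    print(question, "->", answer)
```

A robust reasoner should score the same across such variants; the paper's finding is that model accuracy drops when names and numbers change, which is what this kind of resampling is designed to expose.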

Thursday Oct 31, 2024

Summary of https://arxiv.org/pdf/2410.03703
This research paper investigates the impact of large language models (LLMs) on human creativity, specifically focusing on divergent and convergent thinking with over 1,100 participants.
The results showed that while LLMs can provide short-term boosts in performance, they may ultimately hinder independent creative abilities.
The paper concludes that the long-term effects of using LLMs for creativity need to be carefully considered, and AI systems should be designed to enhance, rather than diminish, human cognitive skills.

Thursday Oct 31, 2024

Summary of https://github.com/ibm-granite/granite-3.0-language-models/blob/main/paper.pdf
This document details the development and release of Granite 3.0, a new family of open-source, lightweight foundation language models from IBM. The paper provides a thorough overview of the models' design, including their architecture, training data, and post-training techniques.
It also explores the models' performance across various benchmarks, focusing on their capabilities in general knowledge, instruction following, function calling, retrieval augmented generation, and cybersecurity.
The paper concludes by discussing the socio-technical harms and risks associated with LLMs and outlines IBM's efforts to mitigate these concerns through responsible AI practices.

Wednesday Oct 30, 2024

Summary of https://figshare.com/articles/preprint/Generative_AI_for_teaching_and_learning_pilot_project_2024-5_White_paper/27325515/1?file=50061531
This document proposes a project at George Washington University led by Professor Lorena Barba to implement a course-level AI mentor platform, which would allow instructors to create customized AI assistants for their students.
The ibl.ai platform aims to address the growing use of AI among students and to harness the technology's pedagogical potential. The document outlines the platform's features, benefits, deployment plan, research component, and potential vendor options.
It also highlights the importance of building AI literacy among faculty and students.

Wednesday Oct 30, 2024

Summary of https://tech.ed.gov/files/2024/07/Designing-for-Education-with-Artificial-Intelligence-An-Essential-Guide-for-Developers.pdf
This guide, produced by the U.S. Department of Education, provides recommendations for developers of Artificial Intelligence (AI)-enabled educational technology (edtech) products.
The guide emphasizes building trust within the edtech ecosystem through five core recommendations: designing for education, providing evidence for rationale and impact, advancing equity and protecting civil rights, ensuring safety and security, and promoting transparency and earning trust.
It stresses the shared responsibility of developers and educators to create and deploy AI in education in a safe, secure, ethical, and equitable manner, aligned with existing federal regulations and best practices.

Monday Oct 28, 2024

Summary of https://ai.wharton.upenn.edu/wp-content/uploads/2024/10/AI-Report_Full-Report.pdf
"Growing Up: Navigating Gen AI's Early Years", is a report from AI at Wharton and GBK Collective that analyzes the current state of Generative AI (Gen AI) adoption in enterprises.
The report explores how businesses are using Gen AI, the challenges they face, and the impact it has on their operations, employee roles, and investment strategies.
The report highlights key takeaways regarding the growing use of Gen AI across various functions, the increasing but slowing investment in the technology, and the evolving attitudes and perceptions of business leaders toward Gen AI.
It also provides insights into the leading Gen AI vendors and the most commonly used tools and platforms.

Friday Oct 18, 2024

Summary of https://www.mckinsey.de/~/media/mckinsey/locations/europe%20and%20middle%20east/deutschland/news/presse/2024/2024%20-%2005%20-%2023%20mgi%20genai%20future%20of%20work/mgi%20report_a-new-future-of-work-the-race-to-deploy-ai.pdf
This report, written by the McKinsey Global Institute, examines the impact of artificial intelligence and automation on labor markets in Europe and the United States through 2030.
The report forecasts that certain jobs, particularly those built on repetitive tasks, will be automated, shifting demand toward workers with higher cognitive, technological, and social skills.
The report discusses potential job displacement and the need for workforce training programs to help workers transition to jobs in high demand.
The report also explores the impact of automation on productivity and the economic choices that companies and policymakers must make to manage the transition.

Friday Oct 18, 2024

Summary of https://arxiv.org/pdf/2402.05000
This research explores the use of Large Language Models (LLMs) as educational tools. The authors highlight the need to "pedagogically align" LLMs, meaning training them to provide structured, scaffolded guidance instead of direct answers.
The study proposes a novel approach using Learning from Human Preferences (LHP) algorithms, which leverage preference datasets to guide LLMs towards desired teaching behaviors. The research addresses the challenge of data scarcity by introducing a synthetic data generation technique using the CLASS framework.
Experiments with Llama, Mistral, and Zephyr models show that LHP methods significantly outperform standard supervised fine-tuning (SFT) in achieving pedagogical alignment.
The authors also introduce novel perplexity-based metrics to quantitatively measure the pedagogical alignment of LLMs.
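One way to read the perplexity idea: a pedagogically aligned model should assign lower perplexity to scaffolded tutor responses than to direct-answer responses. Below is a minimal sketch of perplexity computed from per-token log-probabilities, using the standard definition; the scoring setup and the numbers are invented for illustration and are not the paper's implementation:

```python
import math

def perplexity(token_logprobs: list[float]) -> float:
    # Perplexity is the exponential of the mean negative
    # log-likelihood per token.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Invented per-token log-probabilities for two candidate tutor replies:
# an aligned model finds the scaffolded reply more likely (higher
# log-probs), so its perplexity comes out lower.
scaffolded = [-0.2, -0.4, -0.3, -0.25]
direct_answer = [-1.1, -0.9, -1.3, -1.0]

print(perplexity(scaffolded) < perplexity(direct_answer))  # True under these numbers
```

Comparing the two perplexities gives a single scalar signal of whether the model prefers teaching-style responses, which is the kind of quantity such a metric can track across fine-tuning runs.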

Monday Oct 14, 2024

Summary of https://www.sequoiacap.com/article/generative-ais-act-o1/
The source, an essay from Sequoia Capital, analyzes the current state of generative AI, arguing that the field is transitioning from a focus on fast, pattern-based responses to a new era of "agentic reasoning" where AI systems can reason and solve problems at inference time.
The essay highlights OpenAI's new model, o1 (formerly known as Strawberry), which leverages inference-time compute to enable reasoning capabilities similar to AlphaGo, a groundbreaking AI system that defeated a Go master in 2016.
The essay also discusses the implications of this shift for the AI market, arguing that the focus will move towards developing "cognitive architectures" that enable AI agents to solve real-world problems in various domains.
The authors believe that this shift could create a new wave of "agentic applications" that provide services rather than software, potentially disrupting existing software companies.

Monday Oct 14, 2024

Summary of https://darioamodei.com/machines-of-loving-grace
The source is an essay written by Dario Amodei, CEO of Anthropic, a company focusing on AI safety. Amodei argues that while AI poses substantial risks, it also has the potential to transform the world for the better.
He explores five key areas where AI could revolutionize human life: biology and health, neuroscience and mental health, economic development and poverty, peace and governance, and work and meaning. He outlines how powerful AI could compress decades of progress in biology and neuroscience into a few years, leading to cures for diseases and improvements in human capabilities.
Amodei acknowledges that AI's effects on economic development and global equality are more uncertain, but he expresses optimism that AI can help bridge the gap between developed and developing nations. He also discusses the importance of ensuring that AI's benefits are distributed fairly and that democratic values are upheld.
Finally, Amodei explores the complex question of work and meaning in a world where AI could surpass human capabilities, suggesting that humans may still find purpose and value through their relationships and pursuit of personal goals.

Copyright 2024 All rights reserved.
