ibl.ai

ibl.ai is a generative AI education platform based in NYC. This podcast, curated by its CTO, Miguel Amigot, focuses on high-impact trends and reports about AI.

Listen on:

  • Apple Podcasts
  • YouTube
  • Podbean App
  • Spotify
  • Amazon Music
  • iHeartRadio

Episodes

Monday Jan 27, 2025

Summary of https://lor2.gadoe.org/gadoe/file/b172b8fe-0ac8-46a7-bfdb-282df86b52ed/1/Leveraging%20AI%20in%20the%20K-12%20Setting.pdf
This January 2025 Georgia Department of Education document provides guidance on the ethical and effective use of artificial intelligence (AI) in K-12 schools. It emphasizes responsible AI implementation, including data privacy protection and adherence to federal regulations like FERPA and COPPA.
The document outlines procedures for adopting AI policies, vetting AI tools, and providing staff training. It stresses the importance of transparency, human oversight of AI-generated content, and avoiding high-stakes uses of AI.
The guide also offers best practices for classroom AI integration and addressing AI attribution in student work.

Monday Jan 27, 2025

Summary of https://bera-journals.onlinelibrary.wiley.com/doi/10.1111/bjet.13544
This study investigates the effects of using generative AI, specifically ChatGPT, on university students' learning. A randomized controlled trial compared students using ChatGPT with those using human expert support, writing analytics tools, or no support at all.
The study examined intrinsic motivation, self-regulated learning processes, and learning performance across different groups. Results showed ChatGPT improved essay scores but did not significantly enhance motivation or knowledge transfer, raising concerns about "metacognitive laziness"—over-reliance on AI hindering deeper learning.
The study concludes that AI should supplement, not replace, human interaction in education.

Monday Jan 27, 2025

Summary of https://airisk.mit.edu
This research paper and its accompanying materials introduce the AI Risk Repository, a comprehensive resource for understanding and addressing risks from artificial intelligence.
The repository includes a database of over 3,000 real-world AI incidents, along with two taxonomies classifying AI risks: a causal taxonomy (by entity, intent, and timing) and a domain taxonomy (by seven broad domains and 23 subdomains).
Based on the AI Risk Repository, the most frequently discussed AI risks, categorized by domain and ordered with their frequency in the source documents, are:
  • AI System Safety, Failures & Limitations (the most frequently discussed domain):
      • AI pursuing its own goals in conflict with human goals or values: mentioned in 46% of the documents.
      • Lack of capability or robustness: mentioned in 59% of the documents.
  • Socioeconomic & Environmental Harms:
      • Power centralization and unfair distribution of benefits: mentioned in 37% of the documents.
      • Increased inequality and decline in employment quality: mentioned in 34% of the documents.
  • Discrimination & Toxicity:
      • Unfair discrimination and misrepresentation: mentioned in 63% of the documents.
  • Privacy & Security:
      • Compromise of privacy by obtaining, leaking, or correctly inferring sensitive information: mentioned in 61% of the documents.
  • Malicious Actors & Misuse:
      • Cyberattacks, weapon development or use, and mass harm: mentioned in 54% of the documents.
  • Misinformation:
      • False or misleading information: mentioned in 39% of the documents.
  • Human-Computer Interaction:
      • Overreliance and unsafe use: mentioned in 24% of the documents.
It is important to note that while these risks are frequently discussed in the source documents, risks discussed less frequently, such as AI welfare and rights, pollution of the information ecosystem, and loss of consensus reality, may also be of significant importance.

Friday Jan 24, 2025

Summary of https://iblnews.org/ai-will-generate-better-student-learning-outcomes-as-teaching-models-change-says-aacu
This report summarizes a survey conducted by the American Association of Colleges and Universities (AAC&U) and Elon University's Imagining the Digital Future Center on the impact of generative AI on higher education. The survey of 337 college leaders reveals widespread student use of AI tools, but a significant lack of faculty preparedness and concerns about academic integrity.
While many leaders anticipate positive impacts on learning and research, they also express worries about over-reliance, equity issues, and the need for ethical considerations in AI education.
The report highlights the need for institutional change, including policy updates, faculty development, and curriculum adjustments to effectively integrate AI into teaching and learning. Overall, a cautiously optimistic outlook prevails, with most leaders expecting positive impacts despite significant challenges.
Key findings and takeaways:
  • Only 2% of higher ed leaders think most of their faculty are ready to use AI in teaching.
  • 65% feel 2024 graduates weren't ready for AI-driven workplaces.
  • 54% say faculty can't spot AI-generated content, while 59% say cheating has increased. 🕵️‍♀️
  • Students are using AI much more than faculty.
  • Faculty resistance, not student resistance, is seen as the biggest barrier to adopting AI.
  • Only 19% of schools offer AI majors or minors, while 69% have policies about AI use. 🎓
  • While 91% believe AI enhances learning, 92% worry it undermines deep learning.

Friday Jan 24, 2025

Summary of https://connect.chronicle.com/from-data-to-discovery.html
Google outlines a roadmap for higher education institutions to leverage artificial intelligence. It emphasizes the crucial role of robust data management and a strong data culture, highlighting the challenges of "dark data" and siloed information.
The document showcases successful examples of data-driven initiatives in universities, emphasizing the need for data literacy training and improved data architectures to unlock AI's potential.
Finally, it promotes the use of partnerships and tools to achieve efficient and secure data integration, enabling institutions to make better strategic decisions and improve student outcomes.

Friday Jan 24, 2025

Summary of https://cdn.sanity.io/files/tlr8oxjg/production/6516c1d534ae63c302e0487d08f8819a3e574253.pdf
Udacity's 2025 State of AI at Work report analyzes data from 850 respondents across 87 countries and 22 industries to examine the current and future use of AI in the workplace.
The report reveals a significant gap between the demand for AI skills and the availability of training, with only a third of workers receiving necessary resources from their employers.
Millennials express more positive attitudes towards AI's impact than Gen Z or Gen X, particularly regarding increased efficiency and revenue generation.
Popular AI tools include writing assistants and image generators, while the report highlights the need for increased AI training and data literacy to bridge the skills gap.

Thursday Jan 16, 2025

Summary of https://connect.chronicle.com/chatbots-and-the-campus-of-tomorrow.html
The text discusses the increasing integration of artificial intelligence (AI), specifically generative AI, in higher education. Colleges and universities are exploring AI to address challenges such as declining enrollment and budget constraints, while simultaneously capitalizing on opportunities for personalized learning, improved research, and enhanced operational efficiency.
Examples are provided of various institutions and EdTech companies leveraging AI for tasks ranging from personalized tutoring and student support to streamlining administrative processes and improving student success.
The overall focus is on how AI can transform higher education to meet the evolving needs of students and the workforce.

Tuesday Jan 14, 2025

Summary of https://tech.ed.gov/AI-postsecondary
This U.S. Department of Education document offers guidance on responsibly integrating artificial intelligence (AI) into postsecondary education.
AI's Transformative Impact: The document emphasizes that AI is significantly transforming postsecondary education, impacting areas such as admissions, enrollment, academic advising, and learning environments. It also highlights the dual role of higher education to leverage AI to improve access and success for all students, while preparing students for the AI-driven job market.
Key Recommendations for AI Integration: The brief outlines five key recommendations for postsecondary institutions:
  • Establish transparent policies for AI use.
  • Create infrastructure to support AI in instruction, advising, and assessment.
  • Test and evaluate AI tools rigorously.
  • Seek collaborative partnerships for AI design.
  • Review and supplement programs in light of AI's impact on future jobs.
These recommendations are designed to be inclusive and adaptive for institutions with varying levels of resources and expertise.
Ethical Considerations and Transparency: The document stresses the importance of ethical AI practices, including ensuring equity, fairness, and non-discrimination. It uses examples of "stealth assessment" and "continuous monitoring" to demonstrate how a lack of transparency can erode trust and undermine institutional values. The document highlights the need for clear disclosure of data use and affirmative consent. It also mentions the potential for algorithmic discrimination and the need to mitigate this through rigorous testing and evaluation of AI systems.
AI Literacy: The document emphasizes the importance of developing AI literacy for students, faculty, and staff to ensure safe and effective use of AI. AI literacy includes understanding, using, and critically evaluating AI systems, as well as addressing how AI can facilitate discrimination and harassment. It notes that non-traditional students may face particular challenges in developing AI literacy skills. The document also states that faculty should be given the time to collaborate with their peers to learn how to implement AI models in their teaching and research.
Collaborative Partnerships: The document recommends forging partnerships with industry, non-profit organizations, and other postsecondary institutions on AI design and testing. It notes that collaborative partnerships can bring together educators' expertise in pedagogy, researchers’ expertise in measurement and evaluation, and technology companies’ technical expertise.
AI in Learning and Instruction: The document explores the use of AI in enhancing learning and instruction. It details the use of AI-driven adaptive learning environments to improve learning outcomes. It also points to the use of AI in providing just-in-time individualized help for students and for automating routine tasks for instructors. The document notes the capabilities of AI-enabled tools such as essay scoring systems and Automatic Short Answer Grading (ASAG). It also examines the use of AI to provide feedback to instructors on their practices. The use of AI to support students with disabilities, and the use of virtual and augmented reality for students with disabilities, are also explored. The document also discusses the transformative impact of AI on scientific research.
AI for Institutional Operations: The document details the use of AI to improve institutional operations including recruiting, admissions, retention, and enrollment services. It also examines how AI-enhanced student support can lead to improved learning outcomes. It addresses the use of AI-driven tools to support self-regulated learning, provide support for English language learners and students with disabilities, and support students' mental health.
The Need for Continuous Evaluation: The brief emphasizes that AI systems should be evaluated through iterative cycles of testing, feedback, and improvement, in order to build high-quality evidence on the abilities of AI platforms to support student services. It stresses the importance of determining what works, for whom, and under what conditions when implementing AI-driven tools.
Federal Guidance and Resources: This document is aligned with federal guidelines and guardrails. It highlights resources like the Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework as helpful tools for developing trustworthy AI systems. It also references resources such as the National AI Research Resource Pilot and the Center for Equitable AI and Machine Learning Systems.
These points illustrate the broad scope of the document in addressing the opportunities, challenges, and ethical considerations of integrating AI into postsecondary education. The document provides a comprehensive framework for educational leaders to navigate the complexities of AI implementation, while ensuring equitable and ethical use of these technologies.

Tuesday Jan 14, 2025

Summary of https://cloud.google.com/resources/ai-trends-report
This Google Cloud report, "AI Business Trends 2025," identifies five key trends shaping the AI landscape.
Multimodal AI: This trend focuses on the ability of AI to integrate diverse data sources such as images, video, and audio with text-based commands. This allows AI to learn from a broader range of contextual sources, producing more accurate and tailored outputs. The global multimodal AI market size is predicted to be $2.4B in 2025, growing to $98.9B by the end of 2037.
AI agents: This trend represents the evolution of AI from simple chatbots to sophisticated multi-agent systems. These systems can manage complex workflows, automate business processes, and support human employees. 82% of executives at large companies plan to integrate AI agents within the next 3 years.
Assistive search: AI-powered search is evolving from simple keyword searches to a more natural way of discovering information using images, audio, video, and conversational prompts. This shift is enabling users to access and interact with information more efficiently. The enterprise search market is predicted to reach $12.9B by 2031.
AI-powered customer experience (CX): AI is being used to provide seamless and personalized customer service and support. This is expected to be a top priority for new AI initiatives, with companies focusing on providing real-time conversational experiences. 71% of consumers expect companies to deliver personalized interactions.
Security with AI: AI is being adopted into security and privacy best practices. It is used to bolster security defenses, identify and combat threats, and speed up responses. Organizations that apply security AI and automation see an average reduction in breach costs of $2.2 million.
These trends are expected to transform how organizations operate, compete, and innovate in 2025.

Tuesday Jan 14, 2025

Summary of https://www2.deloitte.com/content/dam/Deloitte/us/Documents/gen-ai-multi-agents-pov-2.pdf
This white paper from Deloitte Consulting LLP advocates for the adoption of multiagent AI systems to revolutionize business processes.
It details the design principles for both individual AI agents and multiagent systems, emphasizing a human-in-the-loop approach and a robust reference architecture for scalability.
The paper uses examples from various industries to illustrate how these systems can automate complex workflows, improve efficiency, and foster innovation. A key takeaway is the importance of a systematic approach to implementation, including considerations for data management, talent acquisition, and ethical implications.
Finally, it offers a practical framework for organizations looking to leverage this technology.

Copyright 2024 All rights reserved.
