Episodes

Wednesday Feb 05, 2025
Summary of https://www.digitaleducationcouncil.com/post/digital-education-council-global-ai-faculty-survey
The Digital Education Council's 2025 Global AI Faculty Survey report analyzes faculty perspectives on AI integration in higher education. Key findings reveal widespread AI use, primarily for creating teaching materials, but also significant concerns about student AI evaluation skills and over-reliance.
Faculty desire stronger institutional support, clearer guidelines, and improved AI literacy resources. The report highlights a positive outlook on AI's potential, while acknowledging challenges regarding assessment methods, workload, and ethical considerations.
Here are some interesting takeaways from the Digital Education Council Global AI Faculty Survey 2025 that may not be mainstream:
Faculty are using AI in teaching, but cautiously: While 61% of faculty have used AI in teaching, a large majority (88%) of those are using it minimally to moderately. This suggests that even among adopters, there is a tendency to limit AI's integration into teaching practices. This could be due to a lack of clear guidelines and example use cases, or a deliberate choice by faculty.
Faculty sentiment on AI is divided, with a notable neutral stance: While 57% of faculty hold a positive view of AI in education, a significant 30% remain neutral. This indicates a substantial proportion of educators are either uncertain or have mixed feelings about AI's impact. This neutrality is an important factor to consider when implementing AI strategies at educational institutions.
Most faculty see AI as an opportunity, but a significant minority see it as a challenge: 65% of faculty view AI as an opportunity in education, while 35% see it as a challenge. This split in perception suggests that there are underlying concerns and that a significant portion of educators may need more support in understanding AI's benefits and how to address potential downsides.
Faculty anticipate significant changes to their roles, but are unclear on the specifics: The majority of faculty (64%) believe that AI will bring significant to transformative changes to their roles as instructors. However, a considerable portion (16%) are not fully aware of the possible changes, which suggests a need for institutions to rectify this information gap through AI literacy and skills training.
Most faculty are at early stages of AI proficiency: A large percentage of faculty (40%) are at the beginning or have no understanding of AI literacy and skills. Only a small minority (4%) consider themselves experts. This highlights that considerable development of AI literacy and skills is required among the faculty.
There is a high level of concern over students’ ability to evaluate AI: A significant majority of faculty (83%) are concerned about students’ ability to critically evaluate AI-generated output. This is a key area of concern for faculty, and it is connected to their view of the most important skills that educators need in the age of AI, which is facilitating critical thinking.
Faculty worry about student over-reliance on AI: A large majority (82%) of faculty are concerned about students becoming overly dependent on AI tools. This shows that faculty are not only worried about the quality of the work of students using AI, but also how that work may affect students' learning and their capacity for independent thinking.
Most faculty don't find institutional AI guidelines comprehensive or clear: A large majority of faculty (80%) do not find their institutional AI guidelines for teaching to be comprehensive, and 80% also feel there is a lack of clarity on how AI can be applied in teaching. This demonstrates a significant gap between institutional guidance and faculty needs and perceptions.
There is significant dissatisfaction with AI literacy resources: A large percentage (78%) of faculty do not believe that their institutions have provided sufficient resources to develop faculty AI literacy. This indicates a critical need for institutions to invest in adequate training and resources to help faculty adapt to the changing educational landscape.
Faculty are keen to explore AI but cautious about its use in grading: Faculty are mostly happy to explore AI in course design, creating assignments, and developing teaching materials, but they are more cautious about using it for grading and providing feedback to students.
These insights suggest that while faculty are engaging with AI, there are still many challenges and concerns that institutions need to address to facilitate effective and ethical AI integration in higher education.
This survey complements the DEC's 2024 Global AI Student Survey to provide a holistic understanding of AI's impact on higher education.

Friday Jan 31, 2025
Summary of https://edc.nyc/sites/default/files/2025-01/NYCEDC-NYC-AI-Advantage-2025-Report.pdf
This report from the NYC Economic Development Corporation (NYCEDC) examines New York City's burgeoning artificial intelligence (AI) ecosystem. It highlights NYC's strengths, including a robust talent pool, substantial venture capital investment, and a thriving startup scene, positioning the city as a global leader in applied AI.
The report also addresses challenges such as responsible AI development, workforce transitions, and regulatory considerations. Furthermore, it proposes initiatives to foster AI innovation, support businesses, and develop a skilled workforce, ensuring equitable access to AI's opportunities.
Finally, the report includes an extensive appendix defining key terms and detailing NYC's AI training providers and relevant nonprofits.

Monday Jan 27, 2025
Summary of https://arxiv.org/pdf/2501.09223
This paper details foundational concepts and advanced techniques in large language model (LLM) development. It covers pre-training methods, including masked language modeling and discriminative training, and explores generative model architectures such as the Transformer.
The text also examines scaling LLMs for size and context length, along with alignment strategies such as reinforcement learning from human feedback (RLHF) and instruction fine-tuning.
Finally, it discusses prompting techniques, including chain-of-thought prompting and prompt optimization methods to improve LLM performance and alignment with human preferences.
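As a concrete illustration of one technique the paper covers, chain-of-thought prompting prepends worked reasoning examples to a query so the model emits intermediate steps before its final answer. A minimal sketch (the exemplar text and helper function are illustrative assumptions, not from the paper):

```python
# Minimal sketch of chain-of-thought (CoT) prompting: a few-shot prompt
# whose exemplar shows intermediate reasoning, encouraging the model to
# work step by step before answering.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the worked exemplar so the model imitates its reasoning style."""
    return COT_EXEMPLAR + f"Q: {question}\nA:"

prompt = build_cot_prompt(
    "A cafeteria had 23 apples. It used 20 and bought 6 more. How many now?"
)
print(prompt)  # the model would continue generating from the trailing "A:"
```

The same scaffold also underlies zero-shot variants, where the exemplar is replaced by a cue such as "Let's think step by step."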

Monday Jan 27, 2025
Summary of https://www.researchgate.net/publication/373715148_Cognitive_Architectures_for_Language_Agents
This research paper proposes a framework called CoALA (Cognitive Architectures for Language Agents) for building more sophisticated language agents.
CoALA draws parallels between Large Language Models (LLMs) and production systems from symbolic AI, suggesting that control flow mechanisms used in cognitive architectures can be applied to LLMs to improve reasoning, grounding, learning, and decision-making.
The authors present CoALA as a blueprint for organizing existing methods and guiding future development of more capable language agents, highlighting key components like memory modules and various action types.
The paper examines several existing language agents through the lens of CoALA and proposes actionable directions for future research. Finally, the authors address some conceptual questions regarding the boundaries of agents and their environments.
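To make CoALA's vocabulary concrete, here is a toy sketch of its main components: memory modules (working, episodic, semantic, procedural) and a decision cycle that chooses between internal actions (retrieval, learning) and external actions (acting on the environment). The class and method names are illustrative assumptions, not the paper's API:

```python
# Toy sketch of a CoALA-style agent: memory modules plus a decision cycle
# that selects internal actions (reasoning, retrieval, learning) before
# committing to an external, grounded action.

class CoALAAgent:
    def __init__(self):
        self.working_memory = []    # current context for the decision cycle
        self.episodic_memory = []   # record of past experiences
        self.semantic_memory = {}   # facts about the world
        self.procedural_memory = {} # skills and routines

    def retrieve(self, query):
        """Internal action: pull a relevant fact into working memory."""
        fact = self.semantic_memory.get(query)
        if fact is not None:
            self.working_memory.append(fact)
        return fact

    def learn(self, key, fact):
        """Internal action: write new knowledge to long-term memory."""
        self.semantic_memory[key] = fact

    def decide(self, observation):
        """One decision cycle: observe, retrieve, then pick an external action."""
        self.working_memory.append(observation)
        self.episodic_memory.append(observation)
        known = self.retrieve(observation)
        return f"act_on:{known}" if known else "ask_environment"

agent = CoALAAgent()
agent.learn("door", "the door is locked")
print(agent.decide("door"))
```

In the framework's terms, an LLM would replace the hard-coded `decide` logic, with the memory modules supplying and receiving its context.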

Monday Jan 27, 2025
Summary of https://lor2.gadoe.org/gadoe/file/b172b8fe-0ac8-46a7-bfdb-282df86b52ed/1/Leveraging%20AI%20in%20the%20K-12%20Setting.pdf
This January 2025 Georgia Department of Education document provides guidance on the ethical and effective use of artificial intelligence (AI) in K-12 schools. It emphasizes responsible AI implementation, including data privacy protection and adherence to federal regulations like FERPA and COPPA.
The document outlines procedures for adopting AI policies, vetting AI tools, and providing staff training. It stresses the importance of transparency, human oversight of AI-generated content, and avoiding high-stakes uses of AI.
The guide also offers best practices for classroom AI integration and addressing AI attribution in student work.

Monday Jan 27, 2025
Summary of https://bera-journals.onlinelibrary.wiley.com/doi/10.1111/bjet.13544
This study investigates the effects of using generative AI, specifically ChatGPT, on university students' learning. A randomized controlled trial compared students using ChatGPT with those using human expert support, writing analytics tools, or no support at all.
The study examined intrinsic motivation, self-regulated learning processes, and learning performance across different groups. Results showed ChatGPT improved essay scores but did not significantly enhance motivation or knowledge transfer, raising concerns about "metacognitive laziness"—over-reliance on AI hindering deeper learning.
The study concludes that AI should supplement, not replace, human interaction in education.

Monday Jan 27, 2025
Summary of https://airisk.mit.edu
This research paper and its accompanying materials create the AI Risk Repository, a comprehensive resource for understanding and addressing risks from artificial intelligence.
The repository includes a database of over 3,000 real-world AI incidents, along with two taxonomies classifying AI risks: a causal taxonomy (by entity, intent, and timing) and a domain taxonomy (by seven broad domains and 23 subdomains).
Based on the AI Risk Repository, here are the top AI risks, categorized by domain and ordered by how frequently they appear in the source documents:
AI System Safety, Failures & Limitations: the most frequently discussed domain, including these top risks:
AI pursuing its own goals in conflict with human goals or values: mentioned in 46% of the documents.
Lack of capability or robustness: mentioned in 59% of the documents.
Socioeconomic & Environmental Harms: also frequently discussed, including:
Power centralization and unfair distribution of benefits: mentioned in 37% of the documents.
Increased inequality and decline in employment quality: mentioned in 34% of the documents.
Discrimination & Toxicity: a frequently discussed domain, including:
Unfair discrimination and misrepresentation: mentioned in 63% of the documents.
Privacy & Security: this domain includes:
Compromise of privacy by obtaining, leaking, or correctly inferring sensitive information: mentioned in 61% of the documents.
Malicious Actors & Misuse: this domain includes:
Cyberattacks, weapon development or use, and mass harm: mentioned in 54% of the documents.
Misinformation: this domain includes:
False or misleading information: mentioned in 39% of the documents.
Human-Computer Interaction: this domain includes:
Overreliance and unsafe use: mentioned in 24% of the documents.
It is important to note that while these risks are frequently discussed in the source documents, other risks which are discussed less frequently, such as AI welfare and rights, and pollution of the information ecosystem and loss of consensus reality, may also be of significant importance.
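The two taxonomies are essentially classification axes over the same risk entries, so each risk can be tagged with both causal attributes (entity, intent, timing) and a domain, then filtered or counted along either axis. A small sketch of that idea (the field names and example records are assumptions for illustration, not the repository's actual schema):

```python
# Illustrative sketch: tagging risk entries with both the causal taxonomy
# (entity, intent, timing) and the domain taxonomy, then querying them.

from collections import Counter

risks = [
    {"risk": "Lack of capability or robustness",
     "domain": "AI System Safety, Failures & Limitations",
     "entity": "AI", "intent": "Unintentional", "timing": "Post-deployment"},
    {"risk": "Unfair discrimination and misrepresentation",
     "domain": "Discrimination & Toxicity",
     "entity": "AI", "intent": "Unintentional", "timing": "Post-deployment"},
    {"risk": "Cyberattacks, weapon development or use, and mass harm",
     "domain": "Malicious Actors & Misuse",
     "entity": "Human", "intent": "Intentional", "timing": "Post-deployment"},
]

# Count entries per domain (domain taxonomy axis).
by_domain = Counter(r["domain"] for r in risks)

# Slice along the causal taxonomy axis instead.
unintentional = [r["risk"] for r in risks if r["intent"] == "Unintentional"]

print(by_domain)
print(unintentional)
```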

Friday Jan 24, 2025
Summary of https://iblnews.org/ai-will-generate-better-student-learning-outcomes-as-teaching-models-change-says-aacu
This report summarizes a survey conducted by the American Association of Colleges and Universities (AAC&U) and Elon University's Imagining the Digital Future Center on the impact of generative AI on higher education. The survey of 337 college leaders reveals widespread student use of AI tools, but a significant lack of faculty preparedness and concerns about academic integrity.
While many leaders anticipate positive impacts on learning and research, they also express worries about over-reliance, equity issues, and the need for ethical considerations in AI education.
The report highlights the need for institutional change, including policy updates, faculty development, and curriculum adjustments to effectively integrate AI into teaching and learning. Overall, a cautiously optimistic outlook prevails, with most leaders expecting positive impacts despite significant challenges.
Key findings and takeaways:
Only 2% of higher ed leaders think most of their faculty are ready to use AI in teaching!
65% feel 2024 grads weren't ready for AI-driven workplaces.
54% say faculty can't spot AI-generated content 🕵️♀️, while 59% say cheating has increased!
Students are using AI much more than faculty.
Faculty resistance, not students, is seen as the biggest barrier to adopting AI.
Only 19% of schools have AI majors/minors 🎓, while 69% have policies about AI use.
While 91% believe AI enhances learning, 92% worry it undermines deep learning!

Friday Jan 24, 2025
Summary of https://connect.chronicle.com/from-data-to-discovery.html
Google outlines a roadmap for higher education institutions to leverage artificial intelligence. It emphasizes the crucial role of robust data management and a strong data culture, highlighting the challenges of "dark data" and siloed information.
The document showcases successful examples of data-driven initiatives in universities, emphasizing the need for data literacy training and improved data architectures to unlock AI's potential.
Finally, it promotes the use of partnerships and tools to achieve efficient and secure data integration, enabling institutions to make better strategic decisions and improve student outcomes.

Friday Jan 24, 2025
Summary of https://cdn.sanity.io/files/tlr8oxjg/production/6516c1d534ae63c302e0487d08f8819a3e574253.pdf
Udacity's 2025 State of AI at Work report analyzes data from 850 respondents across 87 countries and 22 industries to examine the current and future use of AI in the workplace.
The report reveals a significant gap between the demand for AI skills and the availability of training, with only a third of workers receiving necessary resources from their employers.
Millennials express more positive attitudes towards AI's impact than Gen Z or Gen X, particularly regarding increased efficiency and revenue generation.
Popular AI tools include writing assistants and image generators, while the report highlights the need for increased AI training and data literacy to bridge the skills gap.

