ibl.ai

ibl.ai is a generative AI education platform based in NYC. This podcast, curated by its CTO, Miguel Amigot, focuses on high-impact trends and reports about AI.

Listen on:

  • Apple Podcasts
  • YouTube
  • Podbean App
  • Spotify
  • Amazon Music
  • iHeartRadio

Episodes

Thursday Apr 03, 2025

Summary of https://journals.sagepub.com/doi/10.1177/20539517241299732
Explores the emerging field of artificial intelligence ethics auditing, examining its rapid growth and current state through interviews with 34 professionals. It finds that while AI ethics audits often mirror financial auditing processes, they currently lack robust stakeholder involvement, clear success metrics, and external reporting.
The study highlights a predominant technical focus on bias, privacy, and explainability, often driven by impending regulations like the EU AI Act. Auditors face challenges including regulatory ambiguity, resource constraints, and organizational complexity, yet they play a vital role in developing frameworks and interpreting standards within this evolving landscape.
AI ethics auditing is an emerging field that mirrors financial auditing in its process (planning, performing, and reporting) but currently lacks robust stakeholder involvement, measurement of success, and external reporting. These audits are often hyper-focused on technical AI ethics principles like bias, privacy, and explainability, potentially neglecting broader socio-technical considerations.
Regulatory requirements and reputational risk are the primary drivers for organizations to engage in AI ethics audits. The EU AI Act is frequently mentioned as a significant upcoming regulation influencing the field. While reputational concerns can be a motivator, a more sustainable approach involves recognizing the intrinsic value of ethical AI for performance and user trust.
Conducting AI ethics audits is fraught with challenges, including ambiguity in interpreting preliminary and piecemeal regulations, a lack of established best practices, organizational complexity, resource constraints, insufficient technical and data infrastructure, and difficulties in interdisciplinary coordination. Many organizations are not yet adequately prepared to undergo effective AI audits due to a lack of AI governance frameworks.
The AI ethics auditing ecosystem is still in development, characterized by ambiguity between auditing and consulting activities, and a lack of standardized measures for quality and accredited procedures. Despite these limitations, AI ethics auditors play a crucial role as "ecosystem builders and translators" by developing frameworks, interpreting regulations, and curating practices for auditees, regulators, and other stakeholders.
Significant gaps exist in the AI ethics audit ecosystem regarding the measurement of audit success, effective and public reporting of findings, and broader stakeholder engagement beyond technical and risk professionals. There is a need for more emphasis on defining success metrics, increasing transparency through external reporting, and actively involving diverse stakeholders, including the public and vulnerable groups, in the auditing process.

Thursday Apr 03, 2025

Summary of https://www.nature.com/articles/s41599-024-04018-w
Investigates how the increasing use of artificial intelligence in organizations affects employee mental health, specifically job stress and burnout. The study of South Korean professionals revealed that AI adoption indirectly increases burnout by first elevating job stress.
Importantly, the research found that employees with higher self-efficacy in learning AI experience less job stress related to AI implementation. The findings underscore the need for organizations to manage job stress and foster AI learning confidence to support employee well-being during technological change. Ultimately, this work highlights the complex relationship between AI integration and its psychological impact on the workforce.
AI adoption in organizations does not directly lead to employee burnout. Instead, its impact is indirect, operating through the mediating role of job stress. AI adoption significantly increases job stress, which in turn increases burnout.
Self-efficacy in AI learning plays a crucial role in moderating the relationship between AI adoption and job stress. Employees with higher self-efficacy in their ability to learn AI experience a weaker positive relationship between AI adoption and job stress. This means that confidence in learning AI can buffer against the stress induced by AI adoption.
The findings emphasize the importance of a human-centric approach to AI adoption in the workplace. Organizations need to proactively address the potential negative impact of AI adoption on employee well-being by implementing strategies to manage job stress and foster self-efficacy in AI learning.
Investing in AI training and development programs is essential for enhancing employees' self-efficacy in AI learning. By boosting their confidence in understanding and utilizing AI technologies, organizations can mitigate the negative effects of AI adoption on employee stress and burnout.
This study contributes to the existing literature by providing empirical evidence, framed by the Job Demands-Resources (JD-R) model and Social Cognitive Theory (SCT), for the indirect impact of AI adoption on burnout through job stress and for the moderating role of self-efficacy in AI learning. This enhances understanding of the psychological mechanisms linking AI adoption and employee mental health.
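As a rough illustration of the model structure described above (not the authors' code or data), the sketch below estimates a moderated-mediation pattern with two OLS regressions on simulated data; all variable names, coefficients, and sample values are hypothetical stand-ins for the study's survey measures.

```python
# Illustrative sketch only: a moderated-mediation structure
# (AI adoption -> job stress -> burnout, moderated by self-efficacy).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "ai_adoption": rng.normal(size=n),
    "self_efficacy": rng.normal(size=n),
})
# Simulated data following the hypothesized pattern: adoption raises stress,
# less so for high-self-efficacy employees; stress in turn raises burnout.
df["job_stress"] = (0.4 * df["ai_adoption"]
                    - 0.3 * df["ai_adoption"] * df["self_efficacy"]
                    + rng.normal(size=n))
df["burnout"] = 0.5 * df["job_stress"] + rng.normal(size=n)

# Stage 1 (mediator model with moderation): a negative interaction term
# means self-efficacy weakens the adoption -> stress link.
stage1 = smf.ols("job_stress ~ ai_adoption * self_efficacy", data=df).fit()

# Stage 2 (outcome model): adoption should show little direct effect on
# burnout once job stress is controlled for (an indirect-only effect).
stage2 = smf.ols("burnout ~ job_stress + ai_adoption", data=df).fit()

print(stage1.params[["ai_adoption", "ai_adoption:self_efficacy"]])
print(stage2.params[["job_stress", "ai_adoption"]])
```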

Thursday Apr 03, 2025

Summary of https://www.eciia.eu/wp-content/uploads/2025/01/The-AI-Act-Road-to-Compliance-Final-1.pdf
"The AI Act: Road to Compliance," serves as a practical guide for internal auditors navigating the European Union's Artificial Intelligence Act, which entered into force in August 2024. It outlines the key aspects of the AI Act, including its risk-based approach that categorizes AI systems and imposes varying obligations based on risk levels, as well as the different roles of entities within the AI value chain, such as providers and deployers.
The guide details the implementation timeline of the Act and the corresponding obligations and requirements for organizations. Furthermore, it presents survey results from over 40 companies regarding their AI adoption, compliance preparations, and the internal audit function's understanding and auditing of AI. Ultimately, the document emphasizes the crucial role of internal auditors in ensuring their organizations achieve compliance and responsibly manage AI risks.
The EU AI Act is now in force (August 1, 2024) and employs a risk-based approach to regulate AI systems, categorizing them into unacceptable, high, limited, and minimal risk levels, with increasing obligations corresponding to higher risk. There's also a specific category for General Purpose AI (GPAI) models, with additional requirements for those deemed to have systemic risk.
Organizations involved with AI systems have different roles (provider, deployer, importer, distributor, authorised representative), each with distinct responsibilities and compliance requirements under the AI Act. The provider and deployer are the primary roles, with providers facing more extensive obligations.
The AI Act follows a phased implementation timeline, with key obligations taking effect from February 2025 (prohibited AI systems) through August 2027 (high-risk AI components in products). Organizations need to start preparing by creating AI inventories, classifying systems by risk, and establishing appropriate policies.
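Purely as an illustration of that preparatory step, the sketch below shows one minimal way an organization might represent an AI inventory tagged with the Act's risk tiers and the phase-in dates mentioned in this summary; the example systems, field names, and structure are hypothetical, not prescribed by the Act or the guide.

```python
from dataclasses import dataclass

# Risk tiers and phase-in milestones as described in the summary above.
RISK_TIERS = ("unacceptable", "high", "limited", "minimal")
KEY_MILESTONES = {
    "prohibited_ai_systems": "2025-02",      # prohibitions start to apply
    "high_risk_ai_in_products": "2027-08",   # high-risk AI components in products
}

@dataclass
class AISystemRecord:
    name: str
    role: str        # e.g. "provider" or "deployer"
    risk_tier: str   # one of RISK_TIERS

# Hypothetical inventory entries.
inventory = [
    AISystemRecord("resume-screening-model", role="deployer", risk_tier="high"),
    AISystemRecord("website-chatbot", role="deployer", risk_tier="limited"),
]

# Systems in the high-risk tier face the most extensive obligations.
print([s.name for s in inventory if s.risk_tier == "high"])
```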
Internal auditors play a vital role in helping organizations achieve compliance with the AI Act by assessing AI risks, auditing AI processes and governance, and making recommendations. They need to ensure the implementation of AI Act requirements within their organizations.
A recent survey of over 40 companies revealed widespread AI adoption but a relatively low level of understanding of the AI Act within internal audit departments. Most internal audit departments are not yet leveraging AI, but when they do, it's mainly for risk assessment. Ensuring adequate AI auditing skills through training is highlighted as a need.

Thursday Apr 03, 2025

Summary of https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5188231
This working paper details a field experiment examining the impact of generative AI on teamwork and expertise within Procter & Gamble. The study involved 776 professionals working on real product innovation challenges, randomly assigned to individual or team settings with or without AI assistance.
The research investigated how AI affects performance, expertise sharing across functional silos, and the social and emotional aspects of collaboration. Findings indicate that AI significantly enhances performance, allowing individuals with AI to match the output quality of traditional human teams. Moreover, AI facilitates the creation of more balanced solutions, regardless of professional background, and fosters more positive emotional responses among users.
Ultimately, the paper suggests that AI functions as a "cybernetic teammate," prompting organizations to reconsider team structures and the nature of collaborative work in the age of intelligent machines.
AI significantly enhances performance in knowledge work, with individuals using AI achieving a level of solution quality comparable to two-person teams without AI. This suggests that AI can effectively replicate certain benefits of human collaboration in terms of output quality.
AI breaks down functional silos and broadens expertise. Professionals using AI produced more balanced solutions that spanned both commercial and technical aspects, regardless of their professional background (R&D or Commercial). AI can also help individuals with less experience in product development achieve performance levels similar to teams with experienced members.
AI fosters positive emotional responses among users. Participants reported more positive emotions (excitement, energy, enthusiasm) and fewer negative emotions (anxiety, frustration) when working with AI compared to working alone without AI, matching or even exceeding the emotional benefits traditionally associated with human teamwork.
AI-augmented teams have a higher likelihood of generating exceptional, top-tier solutions. Teams working with AI were significantly more likely to produce solutions ranking in the top 10% of all submissions, indicating that the combination of human collaboration and AI can be particularly powerful for achieving breakthrough innovations.
AI is not merely a tool but functions as a "cybernetic teammate" that reshapes collaboration. It dynamically interacts with human problem-solvers, provides real-time feedback, bridges expertise boundaries, and influences emotional states, suggesting a fundamental shift in how knowledge work can be structured and carried out.

Thursday Mar 20, 2025

Summary of https://www.sciencedirect.com/science/article/pii/S0167811625000114
Presents a meta-analysis of two decades of studies examining consumer resistance to artificial intelligence (AI). The authors synthesize findings from hundreds of studies with over 76,000 participants, revealing that AI aversion is context-dependent and varies based on the AI's label, application domain, and perceived characteristics.
Interestingly, the study finds that negative consumer responses have decreased over time, particularly for cognitive evaluations of AI. Furthermore, the meta-analysis indicates that research design choices influence observed AI resistance, with studies using more ecologically valid methods showing less aversion.
Consumers exhibit an overall small but statistically significant aversion to AI (average Cohen’s d = -0.21). This means that, on average, people tend to respond more negatively to outputs or decisions labeled as coming from AI compared to those labeled as coming from humans.
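To make that effect size concrete, here is a minimal sketch of how a Cohen's d like the -0.21 above is computed: the standardized mean difference between responses to AI-labeled and human-labeled outputs. The rating values and sample sizes are invented for illustration, not taken from the meta-analysis.

```python
import numpy as np

def cohens_d(ai_ratings, human_ratings):
    """Standardized mean difference using the pooled standard deviation."""
    a = np.asarray(ai_ratings, dtype=float)
    h = np.asarray(human_ratings, dtype=float)
    n1, n2 = len(a), len(h)
    pooled_var = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * h.var(ddof=1)) / (n1 + n2 - 2)
    return (a.mean() - h.mean()) / np.sqrt(pooled_var)

# Hypothetical 1-7 ratings of the same output, labeled as AI- vs. human-made.
ai_label_ratings    = [4.0, 5.0, 3.5, 4.5, 4.0, 3.0]
human_label_ratings = [4.5, 5.5, 4.0, 5.0, 4.5, 3.5]

print(cohens_d(ai_label_ratings, human_label_ratings))  # negative value = aversion to the AI label
```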
Consumer aversion to AI is strongly context-dependent, varying significantly by the AI label and the application domain. Embodied forms of AI, such as robots, elicit the most negative responses (d = -0.83) compared to AI assistants or mere algorithms. Furthermore, domains involving higher stakes and risks, like transportation and public safety, trigger more negative responses than domains focused on productivity and performance, such as business and management.
Consumer responses to AI are not static and have evolved over time, generally becoming less negative, particularly for cognitive evaluations (e.g., performance or competence judgements). While initial excitement around generative AI in 2021 led to a near-null effect in cognitive evaluations, affective and behavioral responses remain significantly negative overall.
The characteristics ascribed to AI significantly influence consumer responses. Negative responses are stronger when AI is described as having high autonomy (d = -0.28), inferior performance (d = -0.53), lacking human-like cues (anthropomorphism) (d = -0.23), and not recognizing the user's uniqueness (d = -0.24). Conversely, limiting AI autonomy, highlighting superior performance, incorporating anthropomorphic cues, and emphasizing uniqueness recognition can alleviate AI aversion.
The methodology used to study AI aversion impacts the findings. Studies with greater ecological validity, such as field studies, those using incentive-compatible designs, perceptually rich stimuli, clear explanations of AI, and behavioral (rather than self-report) measures, document significantly smaller aversion towards AI. This suggests that some documented resistance in purely hypothetical lab settings might be an overestimation of real-world aversion.

Thursday Mar 20, 2025

Summary of https://cset.georgetown.edu/publication/putting-explainable-ai-to-the-test-a-critical-look-at-ai-evaluation-approaches/
This Center for Security and Emerging Technology issue brief examines how researchers evaluate explainability and interpretability in AI-enabled recommendation systems. The authors' literature review reveals inconsistencies in defining these terms and a primary focus on assessing system correctness (building systems right) over system effectiveness (building the right systems for users).
They identified five common evaluation approaches used by researchers, noting a strong preference for case studies and comparative evaluations. Ultimately, the brief suggests that without clearer standards and expertise in evaluating AI safety, policies promoting explainable AI may fall short of their intended impact.
Researchers do not clearly differentiate between explainability and interpretability when describing these concepts in the context of AI-enabled recommendation systems. The descriptions of these principles in research papers often use a combination of similar themes. This lack of consistent definition can lead to confusion and inconsistent application of these principles.
The study identified five common evaluation approaches used by researchers for explainability claims: case studies, comparative evaluations, parameter tuning, surveys, and operational evaluations. These approaches can assess either system correctness (whether the system is built according to specifications) or system effectiveness (whether the system works as intended in the real world).
Research papers show a strong preference for evaluations of system correctness over evaluations of system effectiveness. Case studies, comparative evaluations, and parameter tuning, which are primarily focused on testing system correctness, were the most common approaches. In contrast, surveys and operational evaluations, which aim to test system effectiveness, were less prevalent.
Researchers adopt various descriptive approaches for explainability, which can be categorized into descriptions that rely on other principles (like transparency), focus on technical implementation, state the purpose as providing a rationale for recommendations, or articulate the intended outcomes of explainable systems.
The findings suggest that policies for implementing or evaluating explainable AI may not be effective without clear standards and expert guidance. Policymakers are advised to invest in standards for AI safety evaluations and develop a workforce capable of assessing the efficacy of these evaluations in different contexts to ensure reported evaluations provide meaningful information.

Thursday Mar 20, 2025

Summary of https://www.hbs.edu/ris/Publication%20Files/24-038_51f8444f-502c-4139-8bf2-56eb4b65c58a.pdf
Investigates the economic value of open source software (OSS) by estimating both the supply-side (creation cost) and the significantly larger demand-side (usage value). Utilizing unique global data on OSS usage by firms, the authors calculate the cost to recreate widely used OSS and the replacement value for firms if OSS did not exist.
Their findings reveal a substantial multi-trillion dollar demand-side value, far exceeding the billions needed for recreation, highlighting OSS's critical, often unmeasured, role in the modern economy. The study also examines the concentration of value creation among a small percentage of developers and the distribution of OSS value across different programming languages and industries.
This study estimates that the demand-side value of widely-used open source software (OSS) is significantly larger than its supply-side value. The researchers estimate the supply-side value (the cost to recreate the most widely used OSS once) to be $4.15 billion, while the demand-side value (the replacement value for each firm that uses the software and would need to build it internally if OSS did not exist) is estimated to be much larger at $8.8 trillion. This highlights the substantial economic benefit derived from the reuse of OSS by numerous firms.
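A toy calculation (all numbers invented) illustrates why the demand-side figure dwarfs the supply-side one: the supply side counts the cost of writing each package once, while the demand side counts what every firm that uses it would have to spend to rebuild it internally if OSS did not exist.

```python
# Hypothetical packages: (one-time cost to recreate, number of firms using it).
packages = {
    "web-framework":   (2_000_000, 50_000),
    "crypto-library":  (1_500_000, 80_000),
    "logging-utility": (300_000, 120_000),
}

supply_side = sum(cost for cost, _ in packages.values())
demand_side = sum(cost * n_firms for cost, n_firms in packages.values())

print(f"supply-side (build each package once): ${supply_side:,}")
print(f"demand-side (every firm rebuilds it):  ${demand_side:,}")
print(f"ratio: {demand_side / supply_side:,.0f}x")
```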
The research reveals substantial heterogeneity in the value of OSS across different programming languages. For example, in terms of demand-side value, Go is estimated to be more than four times the value of the next language, JavaScript, while Python has a considerably lower value among the top languages analyzed. This indicates that the economic impact of OSS is not evenly distributed across the programming language landscape.
The study finds a high concentration in the creation of OSS value, with only a small fraction of developers contributing the vast majority of the value. Specifically, it's estimated that 96% of the demand-side value is created by only 5% of OSS developers. These top contributors also tend to contribute to a substantial number of repositories, suggesting their impact is broad across the OSS ecosystem.
Measuring the value of OSS is inherently difficult due to its non-pecuniary (free) nature and the lack of centralized usage tracking. This study addresses this challenge by leveraging unique global data from two complementary sources: the Census II of Free and Open Source Software – Application Libraries and the BuiltWith dataset, which together capture OSS usage by millions of global firms. By focusing on widely-used OSS, the study aims to provide a more precise understanding of its value compared to studies that estimate the replacement cost of all existing OSS.
The estimated demand-side value of OSS suggests that if it did not exist, firms would need to spend approximately 3.5 times more on software than they currently do. This underscores the massive cost savings and productivity enhancement that the existence of OSS provides to the economy. The study argues that recognizing this value is crucial for the future health of the digital economy and for informing policymakers about the importance of supporting the OSS ecosystem.

Thursday Mar 20, 2025

Summary of https://www.hoover.org/sites/default/files/research/docs/cgri-closer-look-110-ai.pdf
Examines the potential impact of artificial intelligence on corporate boardrooms and governance. It argues that while AI's influence on areas like decision-making is acknowledged, its capacity to reshape the operations and practices of the board itself warrants greater attention.
The authors explore how AI could alter board functions, information processing, interactions with management, and the role of advisors, while also considering the challenges of maintaining board-management boundaries and managing information access. Ultimately, the piece discusses how AI could transform various governance obligations and presents both the benefits and risks associated with its adoption in the boardroom.
AI has the potential to significantly transform corporate governance by reshaping how boards function, process information, interact with management and advisors, and fulfill specific governance obligations. Boards are already aware of AI's potential, ranking its increased use across the organization as a top priority.
AI can reduce the information asymmetry between the board and management by increasing the volume, type, and quality of information available to directors. This allows boards to be more proactive and less reliant on management-provided information, potentially leading to better oversight. AI tools can enable directors to search and synthesize public and private information more easily.
The adoption of AI will significantly increase the expectations and responsibilities of board members. Directors will be expected to spend more time preparing for meetings by reviewing and analyzing a greater quantity of information. They will also be expected to ask higher-quality questions and provide deeper insights, leveraging AI tools for analysis and benchmarking.
AI can enhance various governance functions, including strategy, compensation, human capital management, audit, legal matters, and board evaluations. For example, AI can facilitate richer scenario planning, provide real-time compensation benchmarking, identify skills gaps in human capital, detect potential fraud, monitor legal developments, and analyze board effectiveness. This may also lead to a supplementation or replacement of work currently done by paid advisors.
The integration of AI into the boardroom also presents several risks and challenges, including maintaining the separation of board and management responsibilities, managing information access, ensuring data security, addressing the potential for errors and biases in AI models, and avoiding "analysis paralysis". Boards will need to develop new protocols and skills to effectively utilize AI while mitigating these risks.

Sunday Mar 16, 2025

Summary of https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5097445
This working paper by De Freitas et al. investigates why people resist forming relationships with AI companions, despite their potential to alleviate loneliness. The authors reveal that while individuals acknowledge AI's superior availability and non-judgmental nature compared to humans, they do not consider AI relationships to be "true" due to a perceived lack of essential qualities like mutual caring and emotional understanding. Through several studies, the research demonstrates that this resistance stems from a belief that AI cannot truly understand or feel emotions, leading to the perception of one-sided relationships.
Even direct interaction with AI companions only marginally increases acceptance by improving perceptions of superficial features, failing to alter deeply held beliefs about AI's inability to fulfill core relational values. Ultimately, the paper highlights significant psychological barriers hindering the widespread adoption of AI companions for social connection.
People exhibit resistance to adopting AI companions despite acknowledging their superior capabilities in certain relationship-relevant aspects like availability and being non-judgmental. This resistance stems from the belief that AI companions are incapable of realizing the essential values of relationships, such as mutual caring and emotional understanding.
This resistance is rooted in a dual character concept of relationships, where people differentiate between superficial features and essential values. Even if AI companions possess the superficial features (e.g., constant availability), they are perceived as lacking the essential values (e.g., mutual caring), leading to the judgment that relationships with them are not "true" relationships.
The belief that AI companions cannot realize essential relationship values is linked to perceptions of AI's deficiencies in mental capabilities, specifically the ability to understand and feel emotions, which are seen as crucial for mutual caring and thus for a relationship to be considered mutual and "true". Physical intimacy was not found to be a significant mediator in this belief.
Interacting with an AI companion can increase willingness to engage with it for friendship and romance, primarily by improving perceptions of its advertised, more superficial capabilities (like being non-judgmental and available). However, such interaction does not significantly alter the fundamental belief that AI is incapable of realizing the essential values of relationships. The mere belief that one is interacting with a human (even when it's an AI) enhances the effectiveness of the interaction in increasing acceptance.
The strong, persistent belief about AI's inability to fulfill the essential values of relationships represents a significant psychological barrier to the widespread adoption of AI companions for reducing loneliness. This suggests that the potential loneliness-reducing benefits of AI companions may be difficult to achieve in practice unless these fundamental beliefs can be addressed. The resistance observed in the relationship domain, where values are considered essential, might be stronger than in task-based domains where performance is the primary concern.

Sunday Mar 16, 2025

Summary of https://cdn.prod.website-files.com/65af2088cac9fb1fb621091f/67aaca031ed677c879434284_Final_US%20Open-Source%20AI%20Governance.pdf
This document from the Center for AI Policy and Yale Digital Ethics Center examines the contentious debate surrounding the governance of open-source artificial intelligence in the United States. It highlights the tension between the ideological values promoting open access and geopolitical considerations, particularly competition with China.
The authors analyze various policy proposals for open-source AI, creating a rubric that combines ideological factors like transparency and innovation with geopolitical risks such as misuse and global power dynamics. Ultimately, the paper suggests targeted policy interventions over broad restrictions to balance the benefits of open-source AI with national security concerns, emphasizing ongoing monitoring of technological advancements and geopolitical landscapes.
The debate surrounding open-source AI regulation involves a tension between ideological values (innovation, transparency, power distribution) and geopolitical considerations, particularly US-China competition (Chinese misuse, backdoor risks, global power dynamics). Policymakers are grappling with how to reconcile these two perspectives, especially in light of advancements in Chinese open-source AI.
Heavy-handed regulation like blanket export controls on all open-source AI models is likely sub-optimal and counterproductive. Such controls would significantly disrupt the development of specific-use applications, have limited efficacy against Chinese misuse, and could undermine US global power by discouraging international use of American technology.
More targeted interventions are suggested as preferable to broad restrictions. The paper analyzes policies such as industry-led risk assessments for model release and government funding for an open-source repository of security audits. These approaches aim to balance the benefits of open-source AI with the need to address specific security risks more effectively and with less disruption to innovation.
The nature of open-source AI, being globally accessible information, makes it inherently difficult to decouple the US and Chinese ecosystems. Attempts to do so through export controls may have unintended consequences and could be circumvented due to the ease of information transfer.
Further research and monitoring are crucial to inform future policy decisions. Key areas for ongoing attention include tracking the performance gap between open and closed models, understanding the origins of algorithmic innovations, developing objective benchmarks for comparing models from different countries, and advancing technical safety mitigations for open models.
