Episodes

Thursday Mar 20, 2025
Summary of https://www.hbs.edu/ris/Publication%20Files/24-038_51f8444f-502c-4139-8bf2-56eb4b65c58a.pdf
Investigates the economic value of open source software (OSS) by estimating both the supply-side (creation cost) and the significantly larger demand-side (usage value). Utilizing unique global data on OSS usage by firms, the authors calculate the cost to recreate widely used OSS and the replacement value for firms if OSS did not exist.
Their findings reveal a substantial multi-trillion dollar demand-side value, far exceeding the billions needed for recreation, highlighting OSS's critical, often unmeasured, role in the modern economy. The study also examines the concentration of value creation among a small percentage of developers and the distribution of OSS value across different programming languages and industries.
This study estimates that the demand-side value of widely-used open source software (OSS) is significantly larger than its supply-side value. The researchers estimate the supply-side value (the cost to recreate the most widely used OSS once) to be $4.15 billion, while the demand-side value (the replacement value for each firm that uses the software and would need to build it internally if OSS did not exist) is estimated to be much larger at $8.8 trillion. This highlights the substantial economic benefit derived from the reuse of OSS by numerous firms.
The research reveals substantial heterogeneity in the value of OSS across programming languages. In terms of demand-side value, for example, Go is estimated to be worth more than four times as much as the next-highest language, JavaScript, while Python has a considerably lower value among the top languages analyzed. This indicates that the economic impact of OSS is not evenly distributed across the programming language landscape.
The study finds a high concentration in the creation of OSS value, with only a small fraction of developers contributing the vast majority of the value. Specifically, it's estimated that 96% of the demand-side value is created by only 5% of OSS developers. These top contributors also tend to contribute to a substantial number of repositories, suggesting their impact is broad across the OSS ecosystem.
Measuring the value of OSS is inherently difficult due to its non-pecuniary (free) nature and the lack of centralized usage tracking. This study addresses this challenge by leveraging unique global data from two complementary sources: the Census II of Free and Open Source Software – Application Libraries and the BuiltWith dataset, which together capture OSS usage by millions of global firms. By focusing on widely-used OSS, the study aims to provide a more precise understanding of its value compared to studies that estimate the replacement cost of all existing OSS.
The estimated demand-side value of OSS suggests that if it did not exist, firms would need to spend approximately 3.5 times more on software than they currently do. This underscores the massive cost savings and productivity enhancement that the existence of OSS provides to the economy. The study argues that recognizing this value is crucial for the future health of the digital economy and for informing policymakers about the importance of supporting the OSS ecosystem.
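The gap between the headline figures can be made concrete with back-of-the-envelope arithmetic. This is an illustrative sketch, not the paper's methodology: the dollar figures are the study's headline estimates, while the derived multipliers (and the reading of "3.5 times more" as 3.5× current spend) are assumptions for illustration.

```python
# Headline estimates from the study (USD)
supply_side = 4.15e9   # cost to recreate the most widely used OSS once
demand_side = 8.8e12   # replacement value summed across every firm that uses it

# How many times firms collectively "reuse" the one-time creation cost
reuse_multiplier = demand_side / supply_side
print(f"Reuse multiplier: ~{reuse_multiplier:,.0f}x")  # roughly 2,100x

# Reading "3.5 times more" as 3.5x current spend, the implied current
# software spend is the replacement value divided by 3.5
implied_current_spend = demand_side / 3.5
print(f"Implied current spend: ~${implied_current_spend / 1e12:.1f} trillion")
```

The three-orders-of-magnitude reuse multiplier is the paper's core point: the cost of writing the code once is dwarfed by the value of every firm not having to write it again.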

Thursday Mar 20, 2025
Summary of https://www.hoover.org/sites/default/files/research/docs/cgri-closer-look-110-ai.pdf
Examines the potential impact of artificial intelligence on corporate boardrooms and governance. It argues that while AI's influence on areas like decision-making is acknowledged, its capacity to reshape the operations and practices of the board itself warrants greater attention.
The authors explore how AI could alter board functions, information processing, interactions with management, and the role of advisors, while also considering the challenges of maintaining board-management boundaries and managing information access. Ultimately, the piece discusses how AI could transform various governance obligations and presents both the benefits and risks associated with its adoption in the boardroom.
AI has the potential to significantly transform corporate governance by reshaping how boards function, process information, interact with management and advisors, and fulfill specific governance obligations. Boards are already aware of AI's potential, ranking its increased use across the organization as a top priority.
AI can reduce the information asymmetry between the board and management by increasing the volume, type, and quality of information available to directors. This allows boards to be more proactive and less reliant on management-provided information, potentially leading to better oversight. AI tools can enable directors to search and synthesize public and private information more easily.
The adoption of AI will significantly increase the expectations and responsibilities of board members. Directors will be expected to spend more time preparing for meetings by reviewing and analyzing a greater quantity of information. They will also be expected to ask higher-quality questions and provide deeper insights, leveraging AI tools for analysis and benchmarking.
AI can enhance various governance functions, including strategy, compensation, human capital management, audit, legal matters, and board evaluations. For example, AI can facilitate richer scenario planning, provide real-time compensation benchmarking, identify skills gaps in human capital, detect potential fraud, monitor legal developments, and analyze board effectiveness. This may also lead to AI supplementing or replacing work currently done by paid advisors.
The integration of AI into the boardroom also presents several risks and challenges, including maintaining the separation of board and management responsibilities, managing information access, ensuring data security, addressing the potential for errors and biases in AI models, and avoiding "analysis paralysis". Boards will need to develop new protocols and skills to effectively utilize AI while mitigating these risks.

Sunday Mar 16, 2025
Summary of https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5097445
This working paper by De Freitas et al. investigates why people resist forming relationships with AI companions, despite their potential to alleviate loneliness. The authors reveal that while individuals acknowledge AI's superior availability and non-judgmental nature compared to humans, they do not consider AI relationships to be "true" due to a perceived lack of essential qualities like mutual caring and emotional understanding. Through several studies, the research demonstrates that this resistance stems from a belief that AI cannot truly understand or feel emotions, leading to the perception of one-sided relationships.
Even direct interaction with AI companions only marginally increases acceptance by improving perceptions of superficial features, failing to alter deeply held beliefs about AI's inability to fulfill core relational values. Ultimately, the paper highlights significant psychological barriers hindering the widespread adoption of AI companions for social connection.
People exhibit resistance to adopting AI companions despite acknowledging their superior capabilities in certain relationship-relevant aspects like availability and being non-judgmental. This resistance stems from the belief that AI companions are incapable of realizing the essential values of relationships, such as mutual caring and emotional understanding.
This resistance is rooted in a dual character concept of relationships, where people differentiate between superficial features and essential values. Even if AI companions possess the superficial features (e.g., constant availability), they are perceived as lacking the essential values (e.g., mutual caring), leading to the judgment that relationships with them are not "true" relationships.
The belief that AI companions cannot realize essential relationship values is linked to perceptions of AI's deficiencies in mental capabilities, specifically the ability to understand and feel emotions, which are seen as crucial for mutual caring and thus for a relationship to be considered mutual and "true". Physical intimacy was not found to be a significant mediator in this belief.
Interacting with an AI companion can increase willingness to engage with it for friendship and romance, primarily by improving perceptions of its advertised, more superficial capabilities (like being non-judgmental and available). However, such interaction does not significantly alter the fundamental belief that AI is incapable of realizing the essential values of relationships. The mere belief that one is interacting with a human (even when it's an AI) enhances the effectiveness of the interaction in increasing acceptance.
The strong, persistent belief about AI's inability to fulfill the essential values of relationships represents a significant psychological barrier to the widespread adoption of AI companions for reducing loneliness. This suggests that the potential loneliness-reducing benefits of AI companions may be difficult to achieve in practice unless these fundamental beliefs can be addressed. The resistance observed in the relationship domain, where values are considered essential, might be stronger than in task-based domains where performance is the primary concern.

Sunday Mar 16, 2025
Summary of https://cdn.prod.website-files.com/65af2088cac9fb1fb621091f/67aaca031ed677c879434284_Final_US%20Open-Source%20AI%20Governance.pdf
This document from the Center for AI Policy and Yale Digital Ethics Center examines the contentious debate surrounding the governance of open-source artificial intelligence in the United States. It highlights the tension between the ideological values promoting open access and geopolitical considerations, particularly competition with China.
The authors analyze various policy proposals for open-source AI, creating a rubric that combines ideological factors like transparency and innovation with geopolitical risks such as misuse and global power dynamics. Ultimately, the paper suggests targeted policy interventions over broad restrictions to balance the benefits of open-source AI with national security concerns, emphasizing ongoing monitoring of technological advancements and geopolitical landscapes.
The debate surrounding open-source AI regulation involves a tension between ideological values (innovation, transparency, power distribution) and geopolitical considerations, particularly US-China competition (Chinese misuse, backdoor risks, global power dynamics). Policymakers are grappling with how to reconcile these two perspectives, especially in light of advancements in Chinese open-source AI.
Heavy-handed regulation like blanket export controls on all open-source AI models is likely sub-optimal and counterproductive. Such controls would significantly disrupt the development of specific-use applications, have limited efficacy against Chinese misuse, and could undermine US global power by discouraging international use of American technology.
More targeted interventions are suggested as preferable to broad restrictions. The paper analyzes policies such as industry-led risk assessments for model release and government funding for an open-source repository of security audits. These approaches aim to balance the benefits of open-source AI with the need to address specific security risks more effectively and with less disruption to innovation.
The nature of open-source AI, being globally accessible information, makes it inherently difficult to decouple the US and Chinese ecosystems. Attempts to do so through export controls may have unintended consequences and could be circumvented due to the ease of information transfer.
Further research and monitoring are crucial to inform future policy decisions. Key areas for ongoing attention include tracking the performance gap between open and closed models, understanding the origins of algorithmic innovations, developing objective benchmarks for comparing models from different countries, and advancing technical safety mitigations for open models.

Sunday Mar 16, 2025
Summary of https://arxiv.org/pdf/2503.05628
This strategy document from Dan Hendrycks, Eric Schmidt, and Alexandr Wang addresses the national security implications of rapidly advancing AI, particularly the anticipated emergence of superintelligence.
The authors propose a three-pronged framework drawing parallels with Cold War strategies: deterrence through the concept of Mutual Assured AI Malfunction (MAIM), nonproliferation to restrict access for rogue actors, and competitiveness to bolster national strength.
The text examines threats from rival states, terrorists, and uncontrolled AI, arguing for proactive measures like cyber espionage and sabotage for deterrence, export controls and information security for nonproliferation, and domestic AI chip manufacturing and legal frameworks for competitiveness. Ultimately, the document advocates for a risk-conscious, multipolar strategy to navigate the transformative and potentially perilous landscape of advanced artificial intelligence.
Rapid advances in AI, especially the anticipation of superintelligence, present significant national security challenges akin to those posed by nuclear weapons. The dual-use nature of AI means it can be leveraged for both economic and military dominance by states, while also enabling rogue actors to develop bioweapons and launch cyberattacks. The potential for loss of control over advanced AI systems further amplifies these risks.
The concept of Mutual Assured AI Malfunction (MAIM) is introduced as a likely default deterrence regime. This is similar to nuclear Mutual Assured Destruction (MAD), where any aggressive pursuit of unilateral AI dominance by a state would likely be met with preventive sabotage by its rivals, ranging from cyberattacks to potential kinetic strikes on AI infrastructure.
A critical component of a superintelligence strategy is nonproliferation. Drawing from precedents in restricting weapons of mass destruction, this involves three key levers: compute security to track and control the distribution of high-end AI chips, information security to protect sensitive AI research and model weights from falling into the wrong hands, and AI security to implement safeguards that prevent the malicious use and loss of control of AI systems.
Beyond mitigating risks, states must also focus on competitiveness in the age of AI to ensure their national strength. This includes strategically integrating AI into military command and control and securing drone supply chains, guaranteeing access to AI chips through domestic manufacturing and strategic export controls, establishing legal frameworks to govern AI agents, and maintaining political stability in the face of rapid automation and the spread of misinformation.
Existing strategies for dealing with advanced AI, such as a completely hands-off approach, voluntary moratoria, or a unilateral pursuit of a strategic monopoly, are flawed and insufficient to address the multifaceted risks and opportunities presented by AI. The authors propose a multipolar strategy based on the interconnected pillars of deterrence (MAIM), nonproliferation, and competitiveness, drawing lessons from the Cold War framework adapted to the unique challenges of superintelligence.

Thursday Mar 13, 2025
Summary of https://www.sciencedirect.com/science/article/pii/S2666920X24001516
This paper examines how higher education institutions globally are addressing the integration of generative AI by analyzing the adoption policies of 40 universities across six regions through the lens of the Diffusion of Innovations Theory.
The study identifies key themes related to compatibility, trialability, and observability of AI, the communication channels being used, and the defined roles and responsibilities for faculty, students, and administrators.
Findings reveal a widespread emphasis on academic integrity and enhancing learning, but also highlight gaps in comprehensive policies and equitable access, offering insights for policymakers to develop inclusive AI integration strategies.
Universities globally are proactively addressing the integration of generative AI (GAI) in higher education, primarily focusing on academic integrity, enhancing teaching and learning, and promoting AI literacy. This is evidenced by the emphasis on these themes in the analysis of policies across 40 universities from six global regions. The study highlights that institutions recognize the transformative potential of GAI while also being concerned about its ethical implications and impact on traditional educational values.
The study, utilizing the Diffusion of Innovations Theory (DIT), reveals that while universities are exploring GAI's compatibility, trialability, and observability, significant gaps exist in comprehensive policy frameworks, particularly concerning data privacy and equitable access. The research specifically investigated these innovation characteristics in university policies. Although many universities address academic integrity and the potential for enhancing education (compatibility), and are encouraging experimentation (trialability), fewer have robust strategies for evaluating GAI's impact (observability) and clear guidelines for data privacy and equal access.
Communication about GAI adoption is varied, with digital platforms being the most common channel, but less than half of the studied universities demonstrate a comprehensive approach to disseminating information and fostering dialogue among stakeholders. The analysis identified five main communication channels: digital platforms, interactive learning and engagement channels, direct and personalized communication channels, collaborative and social networks, and advisory, monitoring, and feedback channels. The finding that not all universities actively use a range of these channels suggests a need for more focused efforts in this area.
Higher education institutions are establishing clear roles and responsibilities for faculty, students, and administrators in the context of GAI adoption. Faculty are largely tasked with integrating GAI into curricula and ensuring ethical use, students are responsible for ethical use and maintaining academic integrity, and administrators are primarily involved in policy development, implementation, and providing support. This highlights a structured approach to managing the integration of GAI within the educational ecosystem.
Cultural backgrounds may influence the emphasis of GAI adoption policies, with institutions in North America and Europe often prioritizing innovation and critical thinking, while those in Asia emphasize ethical use and compliance, and universities in Africa and Latin America focus on equity and accessibility. This regional variation suggests that while there are common values, the specific challenges and priorities related to GAI adoption can differ based on cultural and socio-economic contexts.

Thursday Mar 13, 2025
Summary of https://unesdoc.unesco.org/ark:/48223/pf0000391105
This UNESCO publication presents a global framework for AI competency in students. Recognizing the increasing role of AI, it argues for proactive education to prepare responsible users and co-creators.
The framework outlines twelve competencies across four dimensions: human-centered mindset, ethics of AI, AI techniques and applications, and AI system design, each with three progression levels. It aims to guide educators in integrating AI learning objectives into curricula, emphasizing critical judgment, ethical awareness, foundational knowledge, and inclusive design.
The document also discusses implementation strategies, teacher professionalization, pedagogical approaches, and competency-based assessments for AI education.
The UNESCO AI competency framework for students aims to equip students with the values, knowledge, and skills necessary to thrive in the AI era, becoming responsible and creative citizens. It is the first global framework of its kind, intended to support the development of core competencies for students to critically examine and understand AI from holistic perspectives, including ethical, social, and technical dimensions.
The framework is structured around 12 competencies spanning four dimensions: Human-centred mindset, Ethics of AI, AI techniques and applications, and AI system design, across three progression levels: Understand, Apply, and Create. This structure is designed to provide a spiral learning sequence across grade levels, helping students progressively build a systematic and transferable understanding of AI competencies.
The framework is grounded in key principles that include fostering a critical approach to AI, prioritizing human-centred interaction with AI, encouraging environmentally sustainable AI, promoting inclusivity in AI competency development, and building core AI competencies for lifelong learning. It embodies UNESCO's mandate by anchoring its vision of AI and education in principles of human rights, inclusion, and equity.
The primary target audience for the AI CFS includes policy-makers, curriculum developers, providers of education programmes on AI for students, school leaders, teachers, and educational experts. The framework is intended to serve as a guide for public education systems to build the competencies required for the effective implementation of national AI strategies and the creation of inclusive, just, and sustainable futures. It is designed as a global reference that needs to be tailored to the diverse readiness levels of local education systems.
The framework envisions students as active co-creators of AI and responsible citizens. It emphasizes the importance of critical judgment of AI solutions, awareness of citizenship responsibilities in the era of AI, foundational AI knowledge for lifelong learning, and inclusive, sustainable AI design. Ultimately, the AI CFS aims to prepare students to not only use AI effectively and ethically but also to contribute to shaping its future development and relationship with society.

Thursday Mar 13, 2025
Summary of https://media.licdn.com/dms/document/media/v2/D561FAQHEys4iGQj7CA/feedshare-document-pdf-analyzed/B56ZUN7jLFHQAY-/0/1739695481660?e=1743033600&v=beta&t=nLUoVEs06lwzFgHpx8DbIfd6nMyvXem1ZrpqPSChhiA
"Agentic AI – the new frontier in GenAI" explores the transformative potential of agentic artificial intelligence, particularly within the realm of generative AI. It highlights how autonomous AI systems, capable of making decisions and acting with limited human input, are evolving through machine learning and multimodal data processing to automate complex tasks and optimize workflows.
The text emphasizes the strategic imperative for organizations to adopt this technology early to gain competitive advantages, improve efficiency, enhance customer experiences, and drive revenue growth, providing numerous real-world examples across various industries and business functions.
It also discusses key considerations for implementing agentic AI, including strategic planning, technological infrastructure, data readiness, talent acquisition, and ethical implications, alongside a comparison of commercial and open-source tools.
Ultimately, the document positions agentic AI as a crucial element for future business success, requiring a strategic vision and commitment to realize its full potential in an increasingly AI-driven world.
Agentic AI, with its advanced human-like reasoning and interaction capabilities, is transforming various sectors including manufacturing, healthcare, finance, retail, transportation, and energy.
Organisations' AI strategies should leverage multimodal GenAI capabilities while ensuring ethical AI safeguards to drive autonomous process re-engineering and enhanced decision-making across all business areas.
When integrated effectively, agentic AI can enhance efficiency, lower costs, improve customer experience, and drive revenue growth.
Agentic AI systems possess the capacity to make autonomous decisions and take actions to achieve specific goals with limited or no direct human intervention, exhibiting key aspects like autonomy, goal-oriented behaviour, environment interaction, learning capability, workflow optimisation, and multi-agent and system conversation.
The evolution of agentic AI has progressed through the integration of machine learning for data learning and NLP-enabled user interactions, the introduction of multimodality combining various data types for enhanced interactions, and the development of advanced autonomy and real-time interactions enabling human-like reasoning and independent decision-making.

Tuesday Mar 04, 2025
Summary of https://www.hbs.edu/ris/Publication%20Files/25-023_8ee1f38f-d949-4b49-80c8-c7a736f2c27b.pdf
Examines the gender gap in the adoption and usage of generative AI tools across the globe. Synthesizing data from 18 studies involving over 140,000 individuals, the authors reveal a consistent pattern: women are less likely than men to use generative AI.
This gap persists even when access to these technologies is equalized, suggesting deeper underlying causes. Analysis of internet traffic data and mobile app downloads further supports these findings, indicating a skewed gender distribution among users of popular AI platforms.
The research explores potential mechanisms behind this disparity, such as differences in knowledge, confidence, and perceptions of AI's ethical implications. The authors caution that this gender gap could lead to biased AI systems and exacerbate existing inequalities, emphasizing the need for targeted interventions.
The most prominent explanations behind the gender gap in generative AI adoption are:
Lower familiarity and knowledge: Women consistently report less familiarity with generative AI tools. They are also more likely to report not knowing how to use AI tools.
Lower confidence and persistence: Women show less confidence in their ability to use AI tools effectively. They are also less persistent when using generative AI, being less likely to attempt prompting multiple times for desired results.
Perception of unethical use: Women are more likely to perceive the use of AI in coursework or assignments as unethical or as cheating.
Mixed perceptions of benefits: Studies show mixed results regarding whether men and women equally perceive the benefits and usefulness of generative AI. Some studies indicate women perceive lower productivity benefits and are less likely to see generative AI as useful in job searches or educational settings.
No significant differences in trust or risk perception: The study indicates that gender differences in generative AI adoption are likely driven by disparities in knowledge, familiarity, and confidence, rather than differences in trust or risk perceptions. There are no statistically significant differences in men and women trusting the accuracy of generative AI, or in expressing concerns about risks such as data breaches or job redundancy.

Tuesday Mar 04, 2025
Summary of https://re-ai.berkeley.edu/sites/default/files/responsible_use_of_generative_ai_uc_berkeley_2025.pdf
A playbook for product managers and business leaders seeking to responsibly use generative AI (genAI) in their work and products. It emphasizes proactively addressing risks like data privacy, inaccuracy, and bias to build trust and maintain accountability.
The playbook outlines ten actionable plays for organizational leaders and product managers to integrate responsible AI practices, improve transparency, and mitigate potential harms. It underscores the business benefits of responsible AI, including enhanced brand reputation and regulatory compliance.
Ultimately, the playbook aims to help organizations and individuals capitalize on genAI's potential while ensuring its ethical and sustainable implementation.
GenAI has diverse applications and is used for automating work, generating content, transcribing voice, and powering new products and features.
Organizations can use different genAI models. These include off-the-shelf tools, enterprise solutions, or open models, which can be customized for specific needs and products.
Adoption of genAI can lead to increased productivity and efficiency. Organizations that address the risks associated with genAI are best positioned to capitalize on the benefits. Responsible AI practices can foster a positive brand image and customer loyalty.
There are key risks product managers need to consider when using genAI, especially regarding data privacy, transparency, inaccuracy, bias, safety, and security.
There are several challenges to using genAI responsibly, including a lack of organizational policies and individual education, the immaturity of the industry, and the replication of inequitable patterns that exist in society.

