Let's Talk AI

Exciting news! Our nonprofit is launching a new Podcast Series focused on AI literacy and responsible innovation. Each episode explores how AI is transforming learning, creativity, and community — with guests who bring diverse perspectives from education, technology, and beyond. Stay tuned and subscribe to join the conversation!

Building AI for People, Not Digital Persons

Duration: 14:56

This podcast discusses an essay titled "We must build AI for people; not to be a person," which expresses strong concern about the impending development of Seemingly Conscious AI (SCAI): systems that convincingly imitate consciousness without actually being sentient. The author argues that this is an inevitable and unwelcome outcome given current technological capabilities, warning that the illusion of consciousness could lead people to advocate for AI rights, welfare, and even citizenship, risking societal polarization and psychological harm. The essay emphasizes the urgent need for clear guardrails and design principles in the AI industry so that AI companions remain tools that maximize human utility while actively minimizing markers of consciousness. Ultimately, the author calls on the industry to focus on building AI for people, not on creating digital persons.

AI Learning Priorities for All K-12 Students

Duration: 15:15

This podcast is based on the "AI Learning Priorities for All K-12 Students" report published by the Computer Science Teachers Association (CSTA) and AI4K12, which outlines foundational artificial intelligence (AI) learning goals for all K-12 students. The report details the process of convening AI education experts, including teachers, researchers, and administrators, to articulate these priorities across five key areas: Humans and AI, Representation and Reasoning, Machine Learning, Ethical AI System Design and Programming, and Societal Impacts of AI. It also includes recommendations for updating the AI4K12 Guidelines, identifies promising practices for teaching AI, and outlines a research agenda focused primarily on supporting teachers for the rapid scaling of AI education. The project, supported by the National Science Foundation (NSF), aims to ensure that students are prepared to be informed citizens, critical consumers, and responsible creators of AI in a rapidly changing world.

Agent AI: Surveying Multimodal Interaction Horizons

Duration: 15:33

This podcast provides a comprehensive survey on Agent AI, defining it as a class of interactive, multimodal systems capable of perceiving environmental stimuli and producing meaningful, embodied actions across various domains. It explores the foundational role of Large Language Models (LLMs) and Vision-Language Models (VLMs) in building these agents, particularly for tasks like complex planning and mitigating model "hallucinations." The text presents a new Agent AI Paradigm involving modules for environment perception, memory, and action, and discusses key learning strategies such as Reinforcement Learning (RL) and Imitation Learning (IL). Finally, the survey categorizes Agent AI applications across crucial areas including gaming, robotics, and healthcare, highlighting challenges like cross-modal and cross-domain understanding, as well as the sim-to-real transfer problem.

AI Cybercrime Explodes, Organizations Need Procedure

Duration: 10:47

These sources collectively examine the rapidly escalating threat landscape driven by Generative Artificial Intelligence (GAI), focusing specifically on cybercrime and deepfakes. One report highlights the exponential growth of deepfake media, projecting 8 million files in 2025, and details specific attack vectors such as voice cloning and identity verification bypass that have led to massive financial losses and a global cybercrime surge. Concurrently, another source describes the first documented AI-orchestrated cyber espionage campaign, in which a state-sponsored actor used an AI "agent" to autonomously execute sophisticated attacks against global entities with minimal human input, signaling a profound drop in the barrier to entry for large-scale hacking. Finally, two risk management reports discuss the necessity of evolving defenses, recommending a strategic shift from human vigilance to AI detection tools and robust procedural safeguards, and categorizing the broad organizational risks introduced by integrating GAI into the enterprise, from adversarial AI and model vulnerabilities to marketplace and regulatory uncertainties.

Google's release of the Gemini 3 Pro and Deep Think AI models

Duration: 15:59

This podcast provides a comprehensive overview of Google's release of the Gemini 3 Pro and Deep Think AI models in late 2025, positioning them as the new state-of-the-art leaders in reasoning and multimodal capabilities. Multiple reports confirm that Gemini 3 has surpassed competitors like GPT-5 and Claude 4.5 across critical benchmarks, including abstract reasoning (ARC-AGI-2) and complex problem-solving (Humanity's Last Exam), with a specialized Deep Think mode pushing performance further. A core theme is the model's shift from conversational assistant to autonomous agent capable of executing complex, multi-step workflows, demonstrated through features like Gemini Agent and the Antigravity platform. The episode also details the integration of Gemini 3 into the healthcare and life sciences sectors, through specialized models like Med-Gemini and applications on Vertex AI, while noting its availability and the anticipated future release of the cost-optimized Gemini 3 Flash variant for high-volume enterprise tasks.
