What about killer robots?

  • Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems

    Congressional Research Service · January 2025

    Lethal autonomous weapon systems (LAWS) are a special class of weapon systems that use sensor suites and computer algorithms to independently identify a target and employ an onboard weapon system to engage and destroy the target without manual human control of the system.
  • The Army's M-1E3 Abrams Tank Modernization Program

    Congressional Research Service · September 2025

    Some Abrams X features reportedly include reduced weight; a hybrid electric diesel engine 50% more fuel efficient than the current Abrams; an unmanned turret which would reduce the crew from four to three soldiers; enhanced armor to protect against bombs dropped by drones; ability to communicate with unmanned aerial vehicles; and an onboard AI system that could both alert the crew to long-range threats and prioritize fires against multiple threats.
  • The DoD Replicator Initiative: Background and Issues for Congress

    Congressional Research Service · September 2025

    Replicator, unveiled on August 28, 2023, is a Department of Defense initiative led by DOD's Defense Innovation Unit, to field thousands of uncrewed systems by August 2025. An issue is whether Replicator efforts would be executed in a manner consistent with DOD's ethical principles and international commitments.

Can you hack an AI?

  • Technical Blog: Strengthening AI Hijacking Evaluations

    NIST AI · January 2025

    AI agents could have a wide range of potential benefits, such as automating scientific research or serving as personal assistants. However, many AI agents are vulnerable to agent hijacking, a type of indirect prompt injection in which an attacker inserts malicious instructions into data that may be ingested by an AI agent.
  • CAISI Evaluation of DeepSeek AI Models Finds Shortcomings and Risks

    NIST AI · September 2025

    DeepSeek's most secure model (R1-0528) responded to 94% of overtly malicious requests when a common jailbreaking technique was used, compared with 8% of requests for U.S. reference models.
  • Examining Backdoor Data Poisoning at Scale

    UK AI Safety Institute · October 2025

    Data poisoning occurs when individuals distribute online content designed to corrupt an AI model's training data, potentially producing dangerous behaviours. It can be used to insert backdoors; specific phrases used to degrade system performance or even make models perform disallowed actions like exfiltrating sensitive data.
  • Managing Risks from Increasingly Capable Open-weight AI Systems

    UK AI Safety Institute · October 2025

    Open-weight models are harder to safeguard, because they can be shared and modified arbitrarily without oversight. This makes developing strong safety assurances harder.

Could we have AI doctors?

  • Artificial Intelligence (AI) in Healthcare

    Congressional Research Service · December 2024

    The use of properly trained AI tools in health care could provide many benefits, including reducing medical errors, improving diagnostics, and streamlining administrative functions. While AI technologies have the potential to improve health care, they may also introduce novel challenges and exacerbate existing ones if not properly overseen.

How are AI doing on Wall Street?

  • Artificial Intelligence in Capital Markets: Policy Issues

    Congressional Research Service · September 2025

    Common AI usage in capital markets include investment management and execution; client support such as robo-adviser service and chatbots; regulatory compliance such as AML and CFT reporting; and back-office functions like internal productivity and risk management. While AI offers potential benefits, its use in capital markets also raises policy concerns.

What's happening in schools?

  • Hand in Hand: Schools' Embrace of AI Connected to Increased Risks to Students

    Center for Democracy and Technology · October 2025

    Four emerging risks associated with AI in schools, all of which increase the more that a school uses AI: (1) Data breaches or ransomware attacks; (2) Tech-enabled sexual harassment and bullying; (3) AI systems that do not work as intended; and (4) Troubling interactions between students and technology.
  • Letter to the U.S. Department of Education on Responsible AI in K-12

    Center for Democracy and Technology · October 2025

    Fifty percent of students agree that using AI in class makes them feel less connected to their teacher. Thirty-eight percent of students agree that it is easier for them to talk to AI than their parents.

Will AI help make new scientific discoveries?

  • Teaching AI How Science Actually Works

    Institute for Progress · August 2025

    The dream of AI for science is straightforward: a legion of graduate students, technicians, and potentially senior researchers who could work 24/7 anywhere in the world, don't get bored or distracted, and don't require years of training before they are productive.
  • What Will AI Look Like in 2030?

    Epoch AI · September 2025

    By 2030, AI will be able to implement complex scientific software from natural language, assist mathematicians formalising proof sketches, and answer open-ended questions about biology protocols.

Will all the jobs go away?

  • The Macroeconomic Effects of Artificial Intelligence

    Congressional Research Service · April 2025

    In February 2024, businesses indicated an expected rate of AI use at 6.6% by fall 2024. One projection of future private U.S. investment in AI also indicates growth to $81.7 billion in 2025 from $47.4 billion in 2022. How much usage would be necessary to create structural shifts in the economy is uncertain.
  • GATE: Modeling the Trajectory of AI and Automation

    Epoch AI · March 2025

    The Growth and AI Transition Endogenous (GATE) model brings together machine learning and economic growth theory to illustrate the key dynamics of AI development, task automation and their downstream macroeconomic effects.
  • FUTURE UNSCRIPTED: Generative AI's Impact on Entertainment Jobs

    Concept Art Association · January 2024

    Almost two-thirds of the 300 business leaders surveyed expect GenAI to play a role in consolidating or replacing existing job titles in their business division over the next three years.

Can AI reason?

  • Evaluating Gemini 2.5 Deep Think's Math Capabilities

    Epoch AI · October 2025

    For most models we have evaluated, including Deep Think, the models performed worse on problems that were rated more demanding in terms of the precision and background knowledge needed.

Does AI have a personality?

  • Concepts in AI Governance: Personality vs. Personalization

    Future of Privacy Forum · September 2025

    Even without memory or data-driven personalization, the increasingly human-like qualities of interactive AI systems can evoke novel risks, including manipulation, over-reliance, and emotional dependency.

How do we monitor AI rollouts and catch bad AI?

  • The NIST AI Risk Management Framework

    NIST AI · August 2022

    The NIST AI Risk Management Framework is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
  • AILuminate: AI Risk and Reliability Benchmark from MLCommons

    MLCommons · April 2025

    AILuminate v1.0 is the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability. It evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior across 12 hazard categories.
  • 2025 Trends in U.S. State AI Legislation

    Future of Privacy Forum · July 2025

    As AI technologies rapidly integrate into key sectors, state policymakers are debating what sorts of rules should govern these tools — with impacts on innovation, consumer protection, and AI's diffusion in society.
  • The 2025 AI Index Report

    Stanford HAI · September 2025

    The AI Index equips policymakers, business leaders, and the public with rigorous, objective insights into AI's technical progress, economic influence, and societal impact.

What can the most powerful AI right now do?

  • gpt-oss-120b & gpt-oss-20b Model Card

    OpenAI · August 2025

    Two open-weight reasoning models that push the frontier of accuracy and inference cost, with strong agentic capabilities (deep research browsing, python tool use, and developer-provided functions). Released under Apache 2.0.
  • Competitive Programming with Large Reasoning Models

    OpenAI · February 2025

    The scaled-up, general-purpose o3 model surpasses specialized pipelines without relying on hand-crafted inference heuristics. Notably, o3 achieves a gold medal at the 2024 IOI and obtains a Codeforces rating on par with elite human competitors.

Could a chatbot talk you into harming yourself or others?

Are our social networks safe?

Are our elections safe?

  • AI in Federal Election Campaigns: Legal Background and Constitutional Considerations

    Congressional Research Service · August 2023

    Federal campaign finance law does not specifically regulate the use of artificial intelligence in political campaign advertising. There are questions about whether regulation of such ads would run afoul of the First Amendment.
  • Do Chatbots Inform or Misinform Voters?

    U.K. AI Safety Institute · September 2025

    As the UK went to the polls last summer, around one in eight voters turned to AI chatbots for answers. Despite widespread fears, we find little evidence that conversational AI makes people less informed.
  • The Help America Vote Act of 2002 (HAVA): Overview and Ongoing Role

    Congressional Research Service · September 2025

    Members have proposed directing the EAC to offer nonfinancial support for election administration, such as research into barriers to voting by individuals who are homeless and voluntary guidance about the use and risks of artificial intelligence in election administration.

How much energy does AI use? Can it be better?

  • Data Centers and Their Energy Consumption: FAQ

    Congressional Research Service · August 2025

    U.S. data center annual energy use in 2023 was approximately 176 terawatt-hours, around 4.4% of U.S. annual electricity consumption. Some projections show that data center energy consumption could double or triple by 2028, accounting for up to 12% of U.S. electricity use.
  • MLPerf Power: Benchmarking the Energy Efficiency of ML Systems

    MLCommons · February 2025

    MLPerf Power is a comprehensive benchmarking methodology for evaluating the energy efficiency of ML systems at power levels ranging from microwatts to megawatts, laying the foundation for sustainable AI.