In a development that has raised serious concerns among safety advocates and government staff alike, the U.S. Department of Transportation (DOT) is reportedly using Google’s Gemini artificial intelligence to draft safety regulations. The practice, described by some as “wildly irresponsible,” has sparked warnings from DOT staffers that the AI-generated rules could lead to injuries and deaths due to potential errors or lack of nuance in safety-critical contexts.
The AI Implementation
According to reporting by ProPublica, the Trump administration has directed the DOT to use AI for expediting the federal rulemaking process. The department has already used Gemini to draft at least one unpublished Federal Aviation Administration rule. Critics point out that this approach could fundamentally alter how safety regulations—governing everything from airplane safety to pipeline integrity—are developed.
The initiative appears to be part of a broader effort to accelerate regulatory processes, with reports suggesting that AI-drafted regulations could be implemented within as little as 30 days. This represents a significant departure from traditional regulatory processes that typically involve extensive review, public comment periods, and expert consultation.
Technical Implementation
While specific technical details about how Gemini is integrated into DOT’s regulatory processes remain limited, using generative AI models to draft complex regulatory language is a novel application of the technology in government. Gemini, like other large language models, is a general-purpose text-generation system; applying it to safety-critical regulatory drafting raises challenges the technology was not designed to address.
Staff Concerns and Criticisms
The use of AI in drafting safety regulations has been met with significant internal resistance. At least six DOT staffers have expressed concerns about the practice, with some describing it as “wildly irresponsible.” These staffers worry that the AI might introduce errors or fail to account for the nuances and complexities inherent in transportation safety regulations.
The concerns aren’t merely theoretical. Safety regulations are highly technical documents that must account for edge cases, specific scenarios, and complex interactions between different systems. AI models, despite their sophistication, can hallucinate facts, misunderstand context, or miss critical details that human experts would recognize.
Broader Pattern of Regulatory Changes
This AI implementation fits into a larger pattern identified by ProPublica, which reported that the Trump administration’s DOT has taken 30 regulatory actions that current and former agency officials consider at odds with the agency’s mission to protect the public. These include postponing rules requiring freight trains transporting hazardous materials to carry emergency oxygen masks for crews.
Implications for AI Governance
The DOT’s use of AI for safety regulation drafting raises fundamental questions about AI governance in government functions. As governments increasingly turn to AI tools for various services, establishing appropriate oversight and safety protocols becomes critical.
Several organizations have been working on frameworks for responsible AI use in government:
- The OECD AI Policy Observatory tracks government use of AI and promotes responsible practices
- UNESCO has developed recommendations on the ethics of AI that could guide governmental AI usage
- The EU’s AI Act establishes governance rules for AI systems, including those in regulated products
The challenge with using AI for safety-critical functions is that errors can have immediate and severe consequences. Unlike AI applications in customer service or content generation, mistakes in safety regulations can directly impact public welfare.
Expert Perspectives
AI safety researchers have expressed concern about governmental use of AI in regulatory processes. While AI can assist in drafting and organizing regulatory language, delegating the initial drafting of safety-critical regulations entirely to an AI system raises significant questions about accountability and accuracy.
Some experts emphasize that AI systems still struggle with understanding context, particularly in specialized domains like transportation safety where regulations must account for specific engineering constraints, environmental factors, and human behavior patterns.
Public Safety Considerations
The primary concern raised by DOT staffers and outside observers is that AI-drafted regulations might contain errors that could compromise public safety. Transportation safety regulations govern everything from aircraft maintenance schedules to pipeline inspection requirements—areas where even small errors can have catastrophic consequences.
Traditional regulation drafting involves teams of subject matter experts who understand not just the technical requirements but also the real-world implications of different regulatory approaches. The concern is that AI systems, no matter how advanced, may lack this deep contextual understanding.
Comparison to Traditional Processes
Traditionally, federal regulations undergo a rigorous process that includes:
- Initial drafting by subject matter experts
- Interagency review
- Public comment periods
- Final review and approval by authorized officials
- Potential judicial review
While AI could potentially assist in various stages of this process—helping to organize comments, draft initial versions, or identify inconsistencies—the complete delegation of drafting authority to an AI system represents a significant departure from established practices.
Moving Forward
The controversy surrounding DOT’s use of AI for safety regulation drafting highlights the urgent need for clear guidelines on governmental AI usage. As AI becomes more capable and ubiquitous, governments must balance the benefits of increased efficiency with the risks of reduced human oversight in critical functions.
Several key questions need to be addressed:
- What level of human oversight should be required for AI-assisted regulation drafting?
- How can governments ensure that AI-generated regulations maintain the same quality and safety standards as human-drafted ones?
- What accountability mechanisms should be in place when AI systems contribute to regulatory errors?
- How can public participation in the regulatory process be maintained when AI is involved in drafting?
The DOT’s initiative, while perhaps well-intentioned in its goal of accelerating rulemaking, serves as a cautionary tale about the challenges of deploying powerful AI systems in safety-critical government functions. As governments continue to explore AI applications, the need for robust governance frameworks becomes increasingly urgent.
In the meantime, the concerns raised by DOT staffers underscore the importance of maintaining human expertise and judgment in areas where public safety is at stake. The intersection of AI governance, government policy, and public safety will likely remain a contentious and important topic as these technologies continue to evolve.
Sources
The information in this article was gathered from multiple sources:
- Ars Technica: “Wildly irresponsible”: DOT’s use of AI to draft safety rules sparks concerns
- ProPublica: How Trump’s DOT Is Loosening Safety Rules
- Slashdot: DOT Plans To Use Google Gemini AI To Write Regulations
- OECD AI Policy Observatory
- UNESCO: Ethics of Artificial Intelligence
- EU AI Act Regulatory Framework