AI Regulation: What’s Next for the U.S.

AI can be transformative, but only if assessed with a standard worthy of the actual trained practitioners behind the solutions.


Artificial intelligence has exploded into the world of architecture, engineering and construction (AEC), with more firms than ever investing in AI technologies in hopes of gaining a competitive edge. But as the mad grab for AI bona fides accelerates, the industry must seize this opportunity to adopt universal standards for the good of the industry and the world we serve.

AI standards haven't necessarily been a high priority for private firms adopting or developing the technology; fewer than half of U.S. companies are estimated to have policies governing the use of AI in the workplace, and AI experts are sounding the alarm about the lack of cohesive standards.

The U.S. government, meanwhile, has not issued any firm guidance; there have been few high-level directives handed down, and unlike the European Union (EU) or other governing bodies, the United States hasn’t passed an overarching AI law.

The new administration may look to propose firm guidelines or rules governing AI. The surprise emergence of DeepSeek has threatened the dominance the United States has held in the AI race through companies such as OpenAI. With a newly competitive landscape, there may be an increased desire to set a level playing field for AI development and deployment in the United States.

That desire hasn't always been present; it's become cliché to say that AI is in its "Wild West" era.

Businesses, including those in the AEC industry, have been moving quickly to adopt AI, amidst a prevailing fear of missing out. Not wanting to be left behind, they're adopting AI technologies without necessarily understanding their quality or behavior in all situations. 

The issue is that there's nothing out there to guide these businesses. Companies and even industry groups are taking a patchwork approach to assessing AI and AI readiness, meaning certain fields are more prepared than others to manage and assess AI systems — and unfortunately, the AEC industry isn’t one of them. 

Without a cohesive standard, even the most lackadaisical methodologies can be considered valid — creating unnecessary risks for the AEC industry and its customers. 

The AEC industry is struggling here; assessments that have sought to gauge the industry's technological preparedness for AI have done little beyond tabulating the total mentions of AI or related technology phrases on a website or looking at simple survey answers. We need to bring our efforts to adopt standards to a higher level.

The AI risk

AI is often called a "black box," and with good reason. Many of the answers that AI spits out involve so many calculations that it's impossible to say precisely how it reached them. When AI is being used to write an email response to a client, that may be an acceptable risk to take; but when it's providing a calculation or an engineering estimate? That's an entirely different risk assessment. Engineers, and the public, need to know precisely how those calculations correlate to reality.

In the United States, compliance with any AI standard is largely voluntary. While the EU has adopted a universal AI Act, which includes, among other things, an enforcement mechanism to ensure compliance, U.S. companies, even those in as exacting a field as engineering, can still choose whether to opt into standards at all.

While some national groups have created high-quality standards, these are far from a universal solution that can be adopted by the hundreds of firms already using AI. One recent example that provides insight into where the AEC industry is headed is the U.S. Food and Drug Administration's guidance on AI-enabled devices, which offers comprehensive direction on measuring AI performance and on how those metrics can be used in medical device marketing. The National Institute of Standards and Technology's (NIST) AI Risk Management Framework is also a good first step, but it offers only a benchmark for firms. The ISO's AI framework, on the other hand, is arguably too in-depth, taking significant time and money to adopt and creating a barrier to entry for most day-to-day AI uses.

As an industry, AEC needs to call for both a compliance standard and a quality standard. The compliance standard should help companies understand how they tolerate and prioritize risk, setting a framework around what makes AI trustworthy. It would give engineering firms a better idea of the reliability and validity of the answers they're getting from their AI systems, and it can also ensure that AI systems, especially those used in engineering, arrive at answers in a transparent and accountable way.

Without a compliance standard, industry firms face a slippery slope: data-driven models feed on inputs that only provide self-affirming answers, with no guarantee that the AI model is actually relying on any of the physics principles that have been taught for hundreds of years.

Right now, any company can say whatever it likes about the quality of its AI models, and there's no third party against which that claim can be checked; a quality standard can help defend against that. A quality standard helps to ensure that an engineering product, whether civil, electrical, water resources, or environmental, isn't simply a statistical echo chamber but can be traced back to a traditional engineering model.
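To make traceability concrete, one practical check, offered here purely as an illustration, is to compare an AI-generated estimate against a closed-form engineering calculation before the number reaches a deliverable. The short Python sketch below is hypothetical: the ai_predict_deflection stand-in, the cantilever example, and the 10% tolerance are assumptions chosen for illustration, not part of any published standard.

# Hypothetical sketch: sanity-checking an AI estimate against a classical
# engineering model. The ai_predict_deflection stand-in and the 10% tolerance
# are illustrative assumptions, not a real product or published standard.

def cantilever_tip_deflection(load_n: float, length_m: float,
                              elastic_modulus_pa: float,
                              moment_of_inertia_m4: float) -> float:
    """Classical tip deflection of a cantilever under a point end load:
    delta = P * L^3 / (3 * E * I)."""
    return load_n * length_m ** 3 / (3 * elastic_modulus_pa * moment_of_inertia_m4)


def ai_predict_deflection(load_n: float, length_m: float,
                          elastic_modulus_pa: float,
                          moment_of_inertia_m4: float) -> float:
    """Placeholder for an opaque AI estimate; in practice this would call the model."""
    # For illustration, return a value that drifts from the physics-based answer.
    return 1.08 * cantilever_tip_deflection(load_n, length_m,
                                            elastic_modulus_pa, moment_of_inertia_m4)


def check_against_physics(ai_value: float, reference_value: float,
                          tolerance: float = 0.10) -> bool:
    """Flag AI outputs that deviate from the traceable engineering model
    by more than the stated tolerance."""
    relative_error = abs(ai_value - reference_value) / abs(reference_value)
    return relative_error <= tolerance


if __name__ == "__main__":
    # Steel cantilever: 5 kN end load, 2 m span, E = 200 GPa, I = 8e-6 m^4.
    args = dict(load_n=5_000.0, length_m=2.0,
                elastic_modulus_pa=200e9, moment_of_inertia_m4=8e-6)
    reference = cantilever_tip_deflection(**args)
    ai_estimate = ai_predict_deflection(**args)
    ok = check_against_physics(ai_estimate, reference)
    print(f"physics model: {reference * 1000:.2f} mm, "
          f"AI estimate: {ai_estimate * 1000:.2f} mm, within tolerance: {ok}")

In practice, the reference could be any traditional model appropriate to the discipline, and an AI output that fails the check would be routed to a licensed engineer for review rather than accepted at face value.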

The current administration has committed to a $500 billion investment in AI, and a set of national compliance standards should certainly be part of that. 

Science, engineering, and design firms have adapted to new technologies before; even today's industry-standard tools were once novel solutions that had to be tested and measured before they could be trusted. AI can be transformative, but only if assessed with a standard worthy of the actual trained practitioners behind the solutions.

 
