At the June 10 session of the Regulations and Procedures Technical Advisory Committee (RPTAC), Microsoft’s Global Trade legal team urged the Department of Commerce to adopt a more flexible and scalable approach to regulating exports of advanced artificial intelligence (AI) infrastructure and services. Principal Corporate Counsel Catherine Moy and Senior Attorney Jesse Horne laid out Microsoft’s position on the rescinded AI diffusion rule and made recommendations for future regulatory frameworks.
Moy emphasized Microsoft’s support for the national security goals underpinning the Department of Commerce’s AI export control efforts, particularly the need to prevent adversarial access to U.S.-origin advanced computing technologies. However, the company expressed serious concerns about several elements of the AI diffusion rule, above all its global tiering structure and its quantitative caps on GPU exports.
“We are aligned with the goal of restricting adversaries’ access to advanced AI capabilities,” said Moy, “but we strongly opposed the tiered country structure and numerical export limits, which undermined cooperation with trusted U.S. allies and created operational gridlock.”
The tiered structure, which assigned most of the world to Tier 2 status, imposed sweeping restrictions even on partner nations, Microsoft argued. Moy said the company supported the Trump administration’s subsequent rescission of the rule and called for any future framework to avoid rigid geographic segmentation.
One of Microsoft’s principal recommendations was to expand the Validated End-User (VEU) framework—originally developed for trusted Chinese end-users but adapted in 2024 for data centers—as an alternative to case-by-case licensing for AI infrastructure exports.
“Shipment-by-shipment licensing is not scalable,” said Horne. “Hyperscalers like Microsoft need a compliance model that allows for trusted, repeat exports to certified secure facilities, especially given the global footprint of our AI operations.”
Microsoft noted that recent licensing delays have severely constrained global AI service delivery. The company warned that if future regulations require broad licensing absent a VEU-style exception, the U.S. risks undercutting its own AI industry’s competitiveness.
Horne criticized the AI diffusion rule’s auditing and certification requirements, especially the proposed application of FedRAMP or SCIF-equivalent controls to non-classified overseas facilities.
“We recommend that BIS align export eligibility with international standards like ISO and NIST,” he said. “Requiring perpetual audits or U.S.-only certifications is not feasible outside the United States and undermines global interoperability.”
The team recommended a standardized, self-attesting model backed by existing security frameworks, rather than requiring extensive third-party verification or costly physical infrastructure upgrades.
While maintaining that physical security requirements should be consistent across use cases, Horne suggested that logical controls—such as model access, data tracking, and customer due diligence—could vary between inference and training.
“Not every user is building a foundational model,” Horne said. “We need a more nuanced approach that reflects the diversity of AI applications and infrastructure operators globally.”
Microsoft’s remarks echoed broader industry frustration with the Commerce Department’s AI diffusion rule, which was published in January 2025 and rescinded earlier this year following widespread criticism from U.S. technology companies and allies abroad.
The company reiterated its public support for advancing U.S. leadership in AI while ensuring exports do not compromise national security. “Winning the AI diffusion race,” Moy said, “requires not just safeguards, but scalable frameworks that support global deployment of secure, U.S.-origin AI systems.”