
White House Considers New Reporting Rules for Cloud Computing Companies

The Biden Administration may require tech giants to report when a customer rents more than a certain amount of computing power.

The White House wants companies to report the amount of computing power they lease out.
Image credit: VideoFlow/Shutterstock

WASHINGTON D.C. – The White House is considering a new approach to enhance national security by compelling cloud computing companies to report when their customers purchase unusually large amounts of computing power.


This initiative is part of an upcoming executive order on artificial intelligence, aimed at identifying potential AI threats early, particularly those originating from foreign entities. While the order is still under consideration and subject to change, it marks a significant step toward treating computing power as a national resource.


The Proposed Cloud Computing Rule


The proposed policy would require companies like Microsoft, Google, and Amazon to disclose when a customer purchases computing resources beyond a specific threshold, similar to the "know-your-customer" rules in the banking sector that are designed to prevent money laundering and other illicit activity.


The objective is to provide American authorities with early warnings about potentially harmful AI activities, such as the development of powerful language models or other AI technologies by foreign entities.


Proponents of the policy, including Microsoft, OpenAI, and the RAND Corporation, contend that it is essential for preventing malicious actors from exploiting AI for harmful purposes. They suggest that a clear framework is needed to determine who should be responsible for collecting and maintaining customer data.


The Challenges for Cloud Computing Companies


However, implementing this policy poses certain challenges. The rapidly evolving nature of AI technology means that by the time reporting thresholds are established, they may already be outdated. Additionally, deciding whether a given level of computing usage is genuine cause for concern could require cloud companies to monitor their customers closely, potentially creating conflicts of interest.


Critics argue that this approach may focus too narrowly on large language models like ChatGPT and overlook other AI tools with lower computational requirements, such as facial recognition algorithms, which have also raised concerns about their misuse.



