Artificial Intelligence Poses Tough Questions for Regulators
As artificial intelligence advances rapidly, governments and policymakers are grappling with how to ensure that powerful AI systems are developed safely. The question of when an AI model becomes potentially hazardous is complex, and computation thresholds are currently the primary tool regulators use to make that call.
Current rules in the United States and Europe measure AI systems by the amount of computation used to train them, expressed in floating-point operations, or "flops." A level of 10^26 flops is the benchmark set under the Biden administration for triggering stricter oversight and reporting requirements. California's new AI legislation mirrors that figure, and also mandates additional reporting if a model costs more than $100 million to train.
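To make the threshold concrete, the short Python sketch below uses the widely cited rule of thumb that training compute is roughly 6 × (parameters) × (training tokens) and compares the result against the 10^26 figure. The model sizes and token counts are illustrative assumptions for this example, not figures taken from any regulation or disclosed training run.

```python
# Illustrative sketch only: estimates training compute with the common
# "6 * parameters * training tokens" rule of thumb, then compares the
# result to the 10^26-flop reporting threshold cited in U.S. policy.
# The model sizes below are hypothetical examples, not regulatory data.

THRESHOLD_FLOPS = 1e26  # total floating-point operations used in training

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough training-compute estimate using the 6 * N * D heuristic."""
    return 6 * n_parameters * n_tokens

examples = [
    ("mid-size model", 70e9, 2e12),            # 70B params, 2T tokens
    ("frontier-scale model", 1.8e12, 15e12),   # hypothetical large run
]

for name, params, tokens in examples:
    flops = estimated_training_flops(params, tokens)
    status = "exceeds" if flops >= THRESHOLD_FLOPS else "falls below"
    print(f"{name}: ~{flops:.2e} flops ({status} the 1e26 threshold)")
```

Under these assumed numbers, the mid-size run comes in around 10^24 flops, well under the threshold, while the frontier-scale run lands just above 10^26, which is the kind of boundary case the reporting rules are meant to capture.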
While such metrics provide a starting framework, critics argue they are an imperfect solution. Computing power alone does not necessarily correlate with risk, and as AI techniques evolve, the types of systems that trigger heightened scrutiny may change. Researchers continue to debate alternative approaches to evaluating generative models.
For now, flops remain the best available proxy for capability: at extremely high levels of training computation, AI systems could theoretically enable unprecedented advances or cause unintended harms. Regulators intend for the thresholds to be revisited as the field progresses, and some experts view the debate as reflecting broader uncertainty about how to ensure super-intelligent systems are developed and applied conscientiously.
Solutions will require ongoing dialogue among technologists, safety researchers, policymakers, and the public. As AI's impacts spread across society, setting principles for its prudent stewardship will be vital to realizing its promise while guarding against potential pitfalls. Continual re-examination of these methods as understanding improves seems a wise path forward.