ChatGPT has kickstarted a bunch of discussion of how AI/ML will change the world. No argument there.
But reading some of it, I’m reminded of something that has bugged me about discussions of AI “safety”. Take this bit, from Sam Hammond, via Marginal Revolution:
ordinary people will have more capabilities than a CIA agent does today. You’ll be able to listen in on a conversation in an apartment across the street using the sound vibrations off a chip bag. You’ll be able to replace your face and voice with those of someone else in real time, allowing anyone to socially engineer their way into anything.
A natural follow-up to these concerns: Will these things be legal? If they’re not legal, will they be monitored? Will there be enforcement? Will there be strong norms against them?
(To its credit, Hammond’s post does describe the interaction of law and AI. My issue is less with the post than with the discussion surrounding ChatGPT.)
AI raises a ton of legitimately new and interesting questions about how we’ll interact. The “AI safety” conversation largely aims to deal with those questions technically. Can you design an AI that we know for sure won’t do X, Y, or Z? That’s good, important work: code is law and all that. Technical fixes are one major, legitimate way to regulate things.
But Lessig puts code (“architecture”) alongside markets, norms, and laws in his discussion of code-as-law.
And most of AI governance is about those other three. What will we incentivize? What will we stigmatize and valorize? What will be legal? Those are boring governance questions, and most of what changes with AI will involve those things, not engineering different sorts of reward functions.
New AI capabilities are going to raise new questions of human governance at a time when we’re seemingly getting worse at governing ourselves. There’s no technical fix to that challenge.