❗️Everyone’s saying OpenAI got the “same deal” Anthropic was banned for. Read the fine print. They’re not the same:
On weapons:
Anthropic asked for “no fully autonomous weapons without human oversight” = a human involved in the decision.
OpenAI’s deal says “human responsibility for the use of force” = someone accountable, which can happen after the fact.
Oversight ≠ Responsibility. One requires a human before the trigger. The other requires a name on the paperwork after.
On surveillance:
Dario said explicitly: current law hasn’t caught up with AI. The government can already buy your movement data, browsing history, and more without a warrant. AI can assemble that into a complete picture of your life, at scale. That’s mass surveillance without breaking a single law.
Anthropic wanted protections beyond current law.
OpenAI’s deal says the Pentagon “reflects them in law and policy.” That’s existing law as the safeguard, the exact law Anthropic said is insufficient.
Same words. Different meanings.