⚡️Nvidia just revealed Vera Rubin.
Ships H2 2026 and the numbers are wild:
→ 10x more performance per watt vs Blackwell
→ 10x cheaper inference token cost
→ 4x fewer GPUs to train the same MoE model
Energy was the biggest bottleneck in AI. Nvidia just made it 10x cheaper.
Source.
@aipost