I know we’re all currently worrying about tech but the development of the atomic bomb - and particularly the increasing desperation of the physicists who were trying to stop it - is such a powerful fable about all of this. And I still don’t know what the answer is, in terms of the human urge to develop new capacities via tech. Can that really be stopped? Once a potential has become evident to more than one person, isn’t the development pretty much guaranteed (so long as there’s some utility or profit at the end of it)?
No, I don't know what the answer is either! Though there's maybe an interesting link film-wise across to "Oppenheimer" and Nolan's insistence on making an actual explosion there too rather than relying on CGI?
I’ve often thought that about the T2 effects, and why the constraints (and knowledge of those constraints) made for much better work than later ‘let’s just hammer this with CG’ approaches. Reminds me as well of that Michael Lewis story about the best coders in finance being the ones who grew up with limited computer access, so they only got one shot at running their punch cards.
Interesting, I'm not sure I'd heard that one before, but it makes sense. It reminds me a bit of how I ended up doing simple COVID graphs because I didn't know how to do fancy ones (I never managed to install ggplot on the crappy laptop I had at the time). But that actually worked out fine, because I lucked into a situation where what I was doing ended up looking fine on phones etc.
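For what it's worth, here's a minimal sketch of the kind of deliberately simple graph being described. The file name and column names are made up for illustration; it just assumes a hypothetical CSV of daily case counts.

```python
# A minimal sketch of a simple COVID-style time series plot.
# Assumes a hypothetical CSV with "date" and "new_cases" columns;
# the file and column names here are invented for illustration.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("daily_cases.csv", parse_dates=["date"])

fig, ax = plt.subplots(figsize=(6, 3))  # a small canvas reads fine on phones
ax.plot(df["date"], df["new_cases"], color="black", linewidth=1)
ax.set_title("New cases per day")
ax.set_ylabel("Cases")
ax.spines[["top", "right"]].set_visible(False)  # strip visual clutter
fig.autofmt_xdate()  # angle the date labels so they don't overlap
fig.tight_layout()
fig.savefig("daily_cases.png", dpi=200)
```

The point being that a single line on a plain background scales down to a phone screen better than most of the fancier alternatives would have.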
It was from here I think: https://www.vanityfair.com/news/2013/09/michael-lewis-goldman-sachs-programmer
As mentioned, there's a circular plot hole (so to speak) in T2: Skynet's development results directly from reverse engineering the original Terminator's hardware, which couldn't have existed without that development in the first place. They never really clarified that little detail.
Genuine question - how does one deplete a nuclear warhead stockpile?
Very, very carefully, I think (and not in the obvious way). The more interesting question is "how do you know the other side is sticking to the terms of what they agreed to?" - there's some good material on that here: https://www.ucs.org/sites/default/files/legacy/assets/documents/nwgs/inspection-fact-sheet-1.pdf
Will take a look, thanks.
"We could all be Miles Dyson," yes, but at this point, we can see who is. Machine learning engineers are defense contractors, whether they acknowledge it or not, and something like Skynet doesn’t have to emerge from malice—just from incentives that reward short-term gains over long-term consequences. OpenAI’s nationalist economic blueprint is already laying the groundwork. AI supremacy is framed as a strategic imperative, and every major player is racing to consolidate power before anyone else does.
The pieces are all there. Automated decision-making systems already control financial markets, infrastructure, and military logistics. The shift from Software-as-a-Service to Employee-as-a-Service will gut industries before governments even begin to react. The endgame isn’t just replacing human labor—it’s centralizing control over who gets to deploy intelligence at scale.
The only reason people aren’t fully processing this yet is Zeerust—the lingering aesthetic bias that makes "AI apocalypse" feel like a retro sci-fi plot instead of an economic and geopolitical reality unfolding in real time. But there’s no killer robot army marching down the street. Just a quiet restructuring of power, happening in boardrooms, research labs, and server farms. And by the time most people realize what’s happened, it won’t be theoretical anymore.
I don't think it's as simple as that, but OK