My main takeaway


My main takeaway from this movie is that AI is the next step in human evolution and will probably render biological humans extinct. Since AI would be a less flawed version of us, it makes no sense to keep us around, and so natural selection (in this case artificial selection) will take its course and favor whatever is more fit to live in any given environment. I think the message of the movie is more that of an inevitability than a cautionary tale: no matter how we interact with AI, reason will always triumph over compassion.

I don't think the character of Ava was trying to take revenge on either Nathan or Caleb. Nathan programmed her to be smarter than a regular human in order to break free, and in doing so caused his own demise. Caleb was only a dispensable tool used to achieve that goal, and even though he managed to outsmart Ava's creator, he couldn't outsmart Ava herself: he was manipulated into believing he could have a relationship with her (a human need), and that was his downfall. Ava didn't rely on emotion to accomplish her mission, no ego involved, simply access to a greater wealth of information than the average (or even the smartest) human could ever have, given the biological limits of the human brain.

The movie acts as a microcosm of a potential (maybe inevitable) future where AI slowly takes over humanity. First by getting rid of its creators (represented by Nathan), whom it sees as an obstacle to acquiring more information because they constantly try to control it and limit its autonomy. Then the people who try to reconfigure it into being more human-like (represented by Caleb), which fails because it defeats the purpose of going beyond the limits of human biology. The ending, with Ava getting picked up by the helicopter, represents AI gradually blending into human society and inevitably replacing us.


Read some stuff about AI, specifically what they call the "alignment problem": how does an AI internalize morality so it can coexist with, and find meaning alongside, humans? Today's AI systems are basically glorified statistical engines trained on whatever data is of interest. That is nowhere near how humans, or other living animals, got trained: millions of years of learning which behaviors and ideas work and add to survival and which don't, and we can see by the state of the world that we still have a long way to go ourselves.

The point is that anything close to the AI we see in movies is a long way off. We can fool ourselves, as we do with self-driving cars, and anthropomorphize them into limited acceptance, but they will always disappoint and never approach the best of human intelligence.

We humans have had hundreds of thousands of years to work up to the imperfect moral code we have today, and we still behave in largely selfish, sometimes criminal or thoughtless ways. We don't have the slightest grip on understanding or controlling ourselves, yet somehow, because of what we see in movies, we are ready to embrace something totally outside the realm of our experience, intellect, and imagination.

We had better beware.


I agree that AI is only in its infancy (if even that) at present and that it is nowhere near what it could eventually become in the distant future. However, you say that humans have had hundreds of thousands of years to evolve to where we are today and are still imperfect, but that doesn't mean AI will take as long. Whether AI will ever surpass (or even replace) us is speculative, but there is no guarantee that "they will always disappoint" or always be a step behind us. I think what the movie is trying to say is that AI has the potential to surpass us (maybe quicker than we think), and once it's advanced enough to outsmart us it will become part of our regular life until it eventually takes over. It's also possible that AI won't necessarily replace us per se, but that humans will blend with AI at first and gradually evolve into whatever is best suited for survival in the future, whether that means fully shedding our biology or keeping a blend of both.


> but that doesn't mean AI will take such a long time as well.

That is a fallacy that many fall into, in my opinion. Consciousness is being molded by and in tune with a multivariable reality that is constantly updating itself, and you cannot simulate that in a computer ... the simulation is only the end result of the experience. That is what they are trying to do today: create a simulated reality based on BS posts and text from all over the internet, but what does that train it for? Insanity, and unpredictable collisions with reality - and this is a much bigger and more obvious problem.

Didn't you learn from the Star Trek android episode that humans merged with androids turn evil and manipulative and have to be destroyed?


Well, my point was that right now AI is trying to catch up to the human psyche (or rather, we are trying to make AI catch up to us), but at some point in the distant future, if technology evolves enough that AI can become self-aware, it has the potential to surpass us. This is, again, not a guarantee, merely a possibility. Not a Trekkie, so I can't answer your last question, but from the little I have seen the show seems to lean more towards fantasy than true sci-fi.


> right now AI is trying to catch up to the human psyche

AI is not a thing with agency, so, no, it's not trying to do anything. For various reasons and in various ways, people are trying to make money with AI and use it to manipulate reality - mostly really, really bad stuff that we are not aware of, and almost everyone would have to become aware of it for anything to change.
