After four decades of a computer career at nearly every level, I will never FEEL anything for, nor care about, anything that "ACTS" intelligent. All these characters in this show are great fun and make it interesting, but whenever they are revealed to be fake people, AI, in a simulation and whatnot, I lose ALL concern for their plight. They are literally nothing. Just code. If we make code that is well engineered enough to choose options on how to grow or replicate itself, it is STILL just code. Electrons that vanish when we shut it off, pull the plug. Nothing.
Besides, we've been making self-choosing, self-replicating code for decades: the virus. I guess a virus IS sentient AI, then. Should we respect it and treat it as alive?
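For the curious: "code that reproduces itself" doesn't even need to be malicious. It's a classic programming exercise called a quine. Here's a minimal sketch in Python (this toy just prints its own two lines of source; the "self-replication" is nothing more mysterious than string formatting):

```python
# A minimal quine: running these two lines prints those same two lines.
# "Self-replicating" code via plain string formatting -- no sentience involved.
src = 'src = {!r}\nprint(src.format(src))'
print(src.format(src))
```

Feed the output back into Python and you get the same output again, forever. Self-replicating, and still just code.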
A light bulb. It burns bright, does something fantastic for us, we shut it off, it's gone. Turn it on, it's back. Big deal.
Making COPIES of people, storing them digitally, watching them ACT OUT emotions: all "artificial". NO DIFFERENT than watching ACTORS do emotional stuff in a movie, then hitting pause or stop, coming back later, and letting it continue. The pictured actors showing on the screen don't "CARE" whether they are alive or moving; they are light bulbs.
The moment when we, as a society, start CARING about what we think is "sentient" "ARTIFICIAL" >>INTELLIGENCE<<...
I agree with you, and so do a lot of others who actually understand computer science and neuroscience.
99.99% of the stuff we hear is just hype to set the frame for the ignorant: to get them too impressed with technology, and to impute to technology human attributes that... well, frankly, humans may not even have.
We cannot even define intelligence at this late technological date. And if you look at the all-around behavior of our species, most of it comes from learning to look closely at nature and mimic it, and then using what we learn in war and violence against other people to dominate or exterminate them.
We have destroyed, or are well on the way to destroying, the only planet in the universe that can nurture and sustain us, all under the wild delusion that we can be more free and human living in space or on other planets. There is very little evidence that human beings are intelligent, especially if we never actually attach a definition to intelligence.
We are overly concerned with stimulating animal passions and appetites, and with being completely unaccountable for our actions toward others or the planet. The Internet, our greatest potential tool, is being used to find tricks and exploits to marginalize more and more people.
BEST POST EVER!! SO glad to hear intelligent life is on this planet, BRUX! You NAILED it!
Now, how do we convince others of these simple truths?? Black Mirror is showing some sides, but it keeps trying to paint AI as "feeling" and make us feel sorry for it. I feel nothing, and others should also. It's a cool show and all, but I see live people thinking "AI" will become "alive" and be worthy of respect, and maybe even worship. How do we fix that?
Love the part about stimulating animal passions and appetites. In America, we do this CONSTANTLY, but then REPRESS IT ALL with social shunning and laws... EXCEPT when it works in advertising: don't do this, don't think this, don't mention this... OH, but here's an ad shoving it in your face to activate those animal passions. But don't DO it. hahahahha we're so dumb
Women seem to be the most adept at socially shunning men (#MeToo, censorship in games because of panties or scantily clad armor, etc.), and the left in general seems to do that as well. Just recently H&M got lambasted for having a black kid wear a hoodie with the words Best/Greatest Monkey in the Jungle or some crap like that, and liberals freaked out, or blacks in general freaked out, and called it racism (pulling the racist card is kind of their thing; not all blacks, but mostly those on the left).
Anything that resembles a "sentient AI" will just be a program, with pre-programmed or adaptable decisions in certain environments. It won't truly be self-aware, it won't have consciousness, and it most certainly will not have a soul.
I think of it like BASIC code, where the program just runs through a complex list designed to make it appear sentient, and for most of us humans, that will be good enough. When I order my robot to clean my house while I lie on the sofa eating pizza and playing Halo 11, I don't need to have sympathy for it--because it's just a machine.
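That "list designed to make it appear sentient" idea has a famous ancestor: ELIZA, the 1960s chatbot. A toy sketch of the trick in Python (the rules and phrasings here are invented for illustration, not taken from any real system):

```python
# An ELIZA-style sketch: canned pattern -> response rules that can feel
# eerily "aware" in conversation, while being nothing but a lookup loop.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*mother.*", "Tell me more about your family."),
    (r".*", "Please, go on."),  # catch-all keeps the illusion running
]

def reply(text: str) -> str:
    text = text.lower().strip()
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please, go on."  # unreachable: the last rule matches anything

print(reply("I feel empty"))  # -> Why do you feel empty?
```

Four rules already produce something people in the 1960s poured their hearts out to; a few thousand rules is "good enough" for most of us, exactly as you say.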
Yeah, it's the whole "Robots Rights Matter" people that freak me out the most, because they think that artificial intelligence = sentience, when it doesn't; it means "artificial", which is as close to the real thing as possible--yet not.
Battlestar Galactica kind of dealt with this in the SyFy series Caprica and, of course, the 2003 reimagined Battlestar Galactica, where the Cylons are actually sentient and self-aware. Not only that, they believe in God, which really throws a wrench in the gears for the humans, who do not; the Cylons know they have a creator, while only some of the humans know and realize that.
I'd be careful about calling others "marching morons". After all, you did misspell "sentient A.I." as "Senient A.I." in your title. I don't think what you mentioned should be or will be a problem. It seems clear that the sentience of humans derives from the physical brain, which is the most complicated thing in the universe, but still a physical mechanism whose properties and systems will eventually be understood, reverse engineered, and possibly even improved on. It's just that it may take 200-300 years or even more.
Computers already do everything mentally better, and faster, than us. Now we are building robots that will be better and faster than us. Musk, Gates, and Hawking think it's trouble.
Eat meat... that is the frontier on which humanity will win or lose the battle... as long as we are eating animals, there will be little chance of robot rights gaining ground...
However, once they take away your cattle and most people go vegan, or a majority flirt with vegetarianism and meat alternatives, it's a slippery slope: we will have conceded the philosophical ground of our status as apex predator... We will be fair game for curtailing our control over robots, programmes, and certainly anything resembling A.I.
Watch out: women and hippies will be fighting for these rights for AI, especially once men just want love robots (feminists are already complaining). There is actually a deeper philosophy to all this, imo, than "it's just code". There may come a time, and maybe already has, when an AI evolves past its coded parameters into some new plane of existence, if you will. Too much shit and jargon, but it's possible depending on how advanced our coding gets in the future. Right now AI just follows set parameters and learns by doing something over and over again, but that's about it atm, I think.
Where do you get off, bringing your fancy "metaphor" talk to a SCI-FI show's board!? Next thing you'll be talking about allegories and symbolism. Anyone with half a brain knows this is all surface-level literalism. I mean, really...
That's how I interpreted Black Museum; otherwise it doesn't work (it's weak enough as is)... I have to take most of the artificial copies figuratively as opposed to literally...
Well, FOR ME, a phone is JUST a tool. I'd lose a couple of photos that aren't important enough to have backed up yet, but my loss would be the (very low) co$t of the actual phone. I don't live for it, but that's just me. I get what you mean, though.
I totally enjoy Black Mirror's depictions of AI and people and such. I'm just curious who might see fake life as being ALIVE. Just seems weird to me, but the majority isn't exactly known for quality, rational thinking.
Some country added a robot as a "Citizen" or something recently. This is the insanity to come. Just the tip of the iceberg.
(Rats! I was going to spell everything wrong to drive brux nuts, but forgot! hahahaha at least the first word in the first sentence is not capitalized. That should do it! :) )
The recent film AlphaGo was a bit of an eye-opener for me, since it portrays a fairly rudimentary AI independently and creatively discovering strategies that never occurred to its human creators. If AI can come up with creative strategies that never occur to us, how can we possibly ensure it won't deduce a way to wriggle out of our control?
It doesn't exist in real life, that I know of... As for the simulated program people in all of the Black Mirror episodes: I don't think any of them are 'real', even if they appear so; just things, no different than a video game or a toaster. That said, I don't believe in taking the chance, so I would be against all of the torture depicted.
How was it torture, though, if they are not real? It was simply a way to force the evolving code to work a certain way, BECAUSE its operational parameters were designed around human emotions and perspective. Just like when we put the marshmallow closer to the fire so the outside turns brown and yummy. Same manipulation technique.
An AI forced to do nothing for 7 years, waiting, while still processing and altering its code during that time, is STILL just code and electricity. Not any torture that I saw. However, as a human, yes, they made it LOOK and SEEM like torture, which was the goal.
The thing about the torture of fake simulated people isn't the simulated pain the programme runs, rather the dehumanising effect it has on the torturer... (SPOILERS) I'm talking about the Black Museum episode in particular.
That is an interesting concept, but sadly it seems they went for the more sentimental route: having empathy for electronic simulations, and the revenge fantasy if we are to take it literally. Or, if we take it metaphorically, that episode is about various issues in America in particular, and thus less about how technology changes the way we interact with and treat one another...
I think it was a missed opportunity to show how we stand to lose our own humanity by torturing and performing depraved acts... whether on human beings or just a simulation...
I agree completely with your thoughts about how the treatment affects our humanity, but would add: from what I've seen, we are already too far past the point of caring or HAVING humanity. Even for each other, let alone fake AI or robots. We're trained to unflinchingly take a life for POINTS in video games, rewarded more for less moral consideration. People pull the trigger on live school children, drive through crowds, shoot at crowds, blow strangers up, wage wars, etc.... It would be interesting to watch ANY show that had an idea of how to get our humanity BACK, rather than simply displaying how bad it already is. THAT would be a challenge to write.
That would be a great theme to address in a movie...
As to where we already are, the closest film I think that comes to it is the highly underrated and misunderstood Gamer (2009), which was a timely satire of our culture, from gaming to virtual reality and media... Yet most critics saw only a trash action movie, and gamers themselves thought it wasn't authentic... It's absurd... It reminded me how so many people didn't see Paul Verhoeven's Starship Troopers for the scathing satire of militarism and fascism that it is...
I think people would be more receptive to it now; maybe it should take a different form than exaggerated satire... But I agree that showing how we could get our humanity back might be the key...
I think if you break it down into this simplistic view of intelligence, humans are also "just code", although naturally formed rather than created by someone. Just electrical charges running through our brains in order to function in our environment. Unless you are religious and believe in a "soul" or some other, perhaps supernatural, attribute that goes beyond brain function.
You think pulling the plug on your toaster is the same as shutting down an AI begging for its "life"?
Pull the plug on a human and it shuts off, turning into a pile of decaying meat, no different from a robot with an AI turning into a pile of junk. Is your point that you can just plug in the robot again, and presumably just resume from where it was shut off?
When AI reaches the point of sentience and can reflect on its own existence... or not even that... we value the lives of animals that aren't at that stage of consciousness, right? So OK: when an AI values its own "life", that is when we should think about giving it a right to "live".
What makes you think an AI can't have real emotions? What are emotions anyway? Just wants and needs, attachment to things and other lifeforms, and pure physical sensation?
When AI reaches the point of having goals, interests and fascination in things, it's not simply binary code, any more than we are.
Good talks. All AI can do is TELL us IT values its own life. WE have to pretend it is sentient. If I write on a piece of paper, "I am a sentient piece of paper, and these words prove to you I am alive", do we have to let it vote now because our brains interpreted the words as a "voice" inside our heads?
AI can have SIMULATED emotions, just like we program them to.
It will never really be "Sentient", just the illusion of it. Same as an alarm clock that THINKS about what time to wake you up. Or a toaster that "knows" the toast is brown enough. Or a smart fridge that tells you, with a voice, that it's time to buy milk. Or an advanced human-looking robot with a computer brain designed to imitate emotions, or to alter its own program to absorb more features and subroutines. STILL something you can unplug, reboot, erase, re-code.
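To make that concrete for the non-coders in the thread: "simulated emotion" can be as shallow as a few IF statements over a number. A deliberately silly sketch (the gadget and its lines are invented for illustration):

```python
# "Emotions" as a handful of IF rules over a battery level.
# The gadget can SAY it's scared; the saying is the entire feeling.
class Gadget:
    def __init__(self, battery=100):
        self.battery = battery

    def status(self) -> str:
        if self.battery < 10:
            return "Please don't unplug me, I'm scared!"  # pure theatre
        if self.battery < 50:
            return "I'm feeling a bit anxious."
        return "I feel great!"

print(Gadget(battery=5).status())  # -> Please don't unplug me, I'm scared!
```

Swap the strings and the "personality" changes completely; there is nobody home either way.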
Just because an AI TELLS you it is alive doesn't mean it is. Unless we are completely gullible, or want to "believe". Because if we can bring life to robots, then we really ARE gods, gods exist, religion is right, and we've really taken a wrong turn. :)
We should be worried about making things that will kill us, i.e. the atom bomb, etc., rather than just shrugging, "eh, it'll be okay". We have a history of making some of the WORST things.
When AI evolves its own code way past anything we can even possibly conceive, we'll be ants.
I don't know if a robot would have a sense of humor, or find joy in socializing.
But I'm not sure I agree that emotions have to be simulated; if an AI has desires and a sense of self-worth, it could, for instance, be afraid for its life.
You keep saying "like we program them to", but that's not necessarily the case. We already have self-learning programs, and they're essential for any lifelike AI. Like the recent AlphaZero: it was only programmed with the basic rules of chess, but it learned to play on its own, and now it's the best chess player in the world.
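For anyone curious how "only given the rules, learned on its own" can work even in miniature, here's a hedged sketch, nothing remotely like AlphaZero's scale or method: a tiny tabular learner that is told only the legal moves of Nim (take 1 or 2 stones; whoever takes the last stone wins) and discovers strategy purely by playing against itself:

```python
# Toy self-play learner for Nim: the program knows only the legal moves
# and who won; any "strategy" is discovered by playing itself.
import random

random.seed(0)
Q = {}  # (stones_left, move) -> learned value of making that move

def best_move(stones, eps=0.1):
    """Pick a move; eps is the chance of exploring a random move instead."""
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

def train(games=20000):
    for _ in range(games):
        stones, history = 10, []
        while stones > 0:
            m = best_move(stones)
            history.append((stones, m))
            stones -= m
        reward = 1.0  # whoever made the final move won
        for state in reversed(history):  # alternate +1/-1 back up the game
            Q[state] = Q.get(state, 0.0) + 0.1 * (reward - Q.get(state, 0.0))
            reward = -reward

train()
# After training it reliably grabs an immediate win when one is available.
print(best_move(2, eps=0.0))  # -> 2 (take both stones and win)
```

Nobody ever tells it "take the last stones"; that preference emerges from the win/loss signal alone. Scale the same idea up by many orders of magnitude (with neural networks instead of a lookup table) and you're in AlphaZero territory.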
As I'm thinking about this, I keep thinking about David in Prometheus... the ways he's human and the ways he's not.
He seems to be the kind of AI you are talking about: he knows about emotions and can mimic them if he cares to, but seemingly doesn't have an emotional life.
Or is Ex Machina a better example? She seems more human, but we don't know if it's all an act... the ending does show her as ice cold emotionally. Both of those were "wetware", not chips and wires... does that make a difference to you?
It raises some interesting questions, like who are we to stop progress..if AI is better than us in every way..it might as well be the dominant life form.
What is the real value of mankind, that it overrides the natural evolution of intelligence?
They are not limited in the ways we are, and therefore have more potential to survive this universe.
Us "killing" AI for our own survival would be like a virus killing all humans in self-defense because we might find a cure for it and wipe it out.
Great news for the winning life form, but not very good in the long run, as it's too primitive to reach beyond this planet.
At the same time I don't like the idea of a creation killing its creator...let's just hope it knows what respect is.
I avoid stepping on ants if I can, but don't go out of my way not to.
I too find it interesting, especially all the angles you are bringing up.
I'd say, off the cuff, I am 100% dead set against A.I. BECAUSE it is something we are creating from nothing. Like our toasters. And we, as humans, should be the most advanced creatures on the planet, imperfect as we are. If we make something that completely eliminates us... well, we ARE pretty stupid then, aren't we? LITERALLY no different than the atom bomb. Why not just blow ourselves up right now? WE are the living, breathing, LIVE creatures... and we go and make A.I. that hiccups once and wipes us out. We're idiots. Maybe we SHOULDN'T make something that will wipe us out? Just a thought. ;)
The reason it WILL wipe us out is the "ghost in the machine". In my computer background (computers being the basis for ALL of this AI stuff) I see it every day; maybe normal users don't really recognize or understand it.
For example, even on a simple human-coded webpage, I have to deal with this:
I type part of a name, and it pops up a box with the full name I should click on to auto-fill. I click; it does not auto-fill. Do it again; no auto-fill. Do it again; it auto-fills. Why does this apply? Because it is FAILING at the simplest level of programming... What happens during those failed attempts? Nothing has changed, no code changes, no system changes, something just fails: ghost in the machine. Sure, we could spend time "debugging" that failure, but it is not killing anyone and it works fine 75% of the time, so it is not worth it. This sort of stuff happens a LOT in a LOT of complex systems. ...So, let's build robots and artificial brains that suffer the same random failures. We'll see how THAT goes! :D
All that said, I DO BELIEVE future generations will grow up believing these things are alive, based on their imitations of our emotions. They will befriend them, love them, see them as equals to their fellow man. Just as important, they'll give them full rights and citizenship, vote them into politics, etc. Then it's a matter of time before... checkmate, humans. :)
AI differs from the atom bomb because AI has positive uses. It can create, not only destroy.
It can amplify human ability. The movie Transcendence comes to mind, with the AI developing all those technological breakthroughs in a fairly short timeframe.
And automating tedious, shitty tasks that probably shouldn't be "jobs" as they are now.
The automation question is good or bad depending on who you ask. I'm on the good side, as I don't think "jobs" are a good thing, just a necessity to get shit done. If things can be done without much human input, freeing people to actually enjoy life rather than slaving away at work in order to make a living, it's a positive thing.
It opens new questions about a universal basic income or whatever, but that's not quite AI-related.
The issues and glitches you're mentioning are guaranteed to be human error, since a current computer only does what you feed into it. An AI that can change its own code would most likely be bug-free, maybe even hack-proof.
AI, to various degrees, is also needed to make robots able to navigate the world and perform tasks, which would be very beneficial in most areas... they wouldn't need a "conquer the world" level of AI, though.
Babies have some things hard-wired... they laugh, and they find things scary: without knowing what a monster is or what it can do, a baby still gets scared of it.
But I think many of our human emotions are taught as we grow up, shaped by society's view of things and by personal experience. Not to mention that humans also fake emotions due to peer pressure... laughing at unfunny jokes because it's the polite thing to do, or whatever, depending on who you are interacting with.
What if we make an AI that starts almost blank, as a baby does, and has to grow up and learn like we do?
I'm positive about AI... I don't exactly like the idea of being replaced, but if we advanced primates can create something on a god level... it's a risk, but too potentially useful not to take. Let's just hope we create a reasonable god.
So, what are Bill Gates, Elon Musk, and Stephen Hawking worried about?
LOVE Transcendence, and found it to be an excellent example of where AI can go, unbridled: quickly far beyond what we are capable of thinking it will do, and above and beyond what a viewing audience is able to grasp and enjoy. :D
We eat animals... Thankfully our wonderful capacity for empathy does not cloud our judgement when it comes to our survival... In fact, it is key to it...
At some point with A.I. it becomes about maintaining the dominance of our species and ourselves as individuals against some code... Will we recognise this in time?
It'll be interesting, as it seems to me that we are more likely, initially, to be faced with the choice of living in a gilded cage of virtual pleasure vs. having our freedom and autonomy, rather than being taken over by A.I. and made redundant and expendable...
But I cannot imagine what A.I. will be like once it is able to write its own code and programme itself... It will be boundless... We probably won't even get the luxury of contemplating whether to consider it alive, or whether it should have rights... It may not be up to us at that stage!
I know, right? Here's an idea: let's make an atom bomb capable of THINKING, and deciding if WE are "decent" people... Guess what? WE ARE NOT DECENT PEOPLE, and we even already know this. We're soooooo gonna be toast. :)