The cyborgs' HUD


So, the cyborgs have a HUD system where words and stuff are shown.

Am I being really dumb, or isn't this very strange? Do they have a camera inside their eyes that looks out through the eyes and sees the words on the HUD screen?

--------
DISCLAIMER: I am not an idiot, but sometimes I am.

reply

They show in the hotel bathroom scene that the human-looking eye is really just a cover for the electronic one.

reply

That's not what I mean.

This is the HUD: http://i.dailymail.co.uk/i/pix/2013/11/13/article-2505561-1963842500000578-689_634x286.jpg

Why are the words there? The cyborg is the machine; he doesn't need a visual representation of information about the objects he perceives. It's not like Iron Man, who has the Jarvis HUD in his suit. By that comparison, the T-1000 is the suit, not Iron Man.

--------
DISCLAIMER: If the post above is stupid, explain the stupidity instead of name-calling.

reply

It's just for the audience's benefit. In real life a machine like that would see everything as a bunch of computer code, basically what you would see in The Matrix. It would be pretty boring if they did that, though.

reply

The unrealistic aspect of it bothers me.

--------
DISCLAIMER: If the post above is stupid, explain the stupidity instead of name-calling.

reply

"It's just for the audience's benefit. In real life a machine like that would see everything as a bunch of computer code, basically what you would see in The Matrix."


But why is this so? Maybe the cyborg's sensory organs are a multi-tasking compartment: maybe Skynet gave him a visual field while matrix-code computations were going on at the same time, and he just keeps that sub-window 'minimized'.

A thoughtless machine can still have eyes to see things. In the same vein, a dumb insect with less neuronal capacity than a peanut can still perceive its surroundings.

His 'human' POV is just a means of anthropomorphizing, in order to grasp a detailed understanding of the world. If he just saw everything in matrix-code, he'd have less tactical prowess in applying logical deduction to his environment; there'd be no vibrancy in his computation.

_______________________
PDBPO LEADER 

reply

[deleted]

It's possible, I guess. I just don't understand how it could have its own line of sight through the human eyes. We see everything because light bounces off objects and onto our retinas, forming images. How could that be accomplished for a robot's eyes?

reply

I'm sure Skynet found a way... it did, after all, have hundreds of millions of test subjects it could experiment on to perfect the process... even after 3 billion human lives were 'ended' on Judgment Day.

Why can't you wretched prey creatures understand that the Universe doesn't owe you anything!?

reply

"We see everything because light bounces off objects and onto our retinas, forming images. How could that be accomplished for a robot's eyes?"

As far as you've described it, they're identical: light falls onto a light-sensitive panel and is turned into an electrical signal that is sent to its 'brain' (the CPU).

All that's different is the way it's processed. We don't know exactly how the brain processes sensory data, but all the computer vision I've seen, usually from car autopilot features, scans for shapes it can identify.

http://i.dailymail.co.uk/i/pix/2013/03/05/article-2288472-1873F1B4000005DC-820_634x356.jpg
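
To give a rough idea of what "scans for shapes it can identify" means in practice, here is a minimal sketch using OpenCV's stock Haar-cascade face detector. The cascade file ships with OpenCV; "frame.jpg" is just a hypothetical input image.

    import cv2

    # Load one of OpenCV's bundled Haar cascades: a classic
    # "scan the image for a shape you can identify" detector.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    frame = cv2.imread("frame.jpg")  # hypothetical input image
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Slide the detector over the image at multiple scales;
    # every hit comes back as a bounding box (x, y, width, height).
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("frame_annotated.jpg", frame)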


What does the world look like to a robot? Probably like it does to a laser scanner:
http://www.psomas.com/wp-content/uploads/2014/08/mobilelaserscanner-road-news.jpg
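
For what it's worth, a single sweep from a 2D laser scanner is just a list of angle/range readings, and turning them into coordinates is one line of trigonometry. A toy sketch with made-up numbers:

    import numpy as np

    # Hypothetical single sweep from a 2D laser scanner:
    # 360 range readings (in metres), one per degree of rotation.
    ranges = np.random.uniform(0.5, 30.0, size=360)  # stand-in data
    angles = np.deg2rad(np.arange(360))

    # Convert the polar (angle, range) readings into x/y points:
    # the "point cloud" that scanner images like the one above show.
    points = np.column_stack([ranges * np.cos(angles),
                              ranges * np.sin(angles)])

    print(points.shape)  # (360, 2): the world as a list of coordinates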


"He's dusted, busted and disgusted, but he's ok"

reply

"In real life a machine like that would see anything in a bunch of computer code. Basically what you would see in the matrix. "

Definitely not.

In 'real life' (and the 'LIFE' part is questionable here), computers wouldn't, and don't, 'see' things.

Do you think a CPU visually sees 'the code', and furthermore, do you think it looks like 'The Matrix'??

Computers, CPUs, GPUs and other chips do not 'see' anything. They do not have eyes. They do not have VISUAL systems to process information. They are simply full of fluctuating, binary electrical states, and that's ALL. There's ABSOLUTELY NO SIGHT OF ANY KIND INVOLVED; it wouldn't look like ANYTHING.

If you really wanted to FORCE a 'visual' onto how a computer would 'see things', it still wouldn't be 'a bunch of computer code' (not that 'computer code' is a specific thing; it's just 'code', a 'programming language', or 'machine code' at best; computers don't have one specific 'code', and when you get down to it, even CPUs really don't).

It would be just zeroes and ones. That's it. That's all a CPU is and does: it 'endlessly' changes zeroes to ones and ones to zeroes, and sometimes keeps a zero or a one from changing.
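
To make the "it's all just bits changing" point concrete, here is binary addition done purely with bit flips and carries; a toy Python illustration, not how any real CPU is wired:

    a, b = 0b0110, 0b0011  # 6 and 3, written as bits

    # XOR flips the bits where the operands differ; AND (shifted left)
    # produces the carries. Repeating until the carry dies out IS
    # binary addition: nothing but zeroes and ones changing.
    while b:
        carry = (a & b) << 1
        a = a ^ b
        b = carry

    print(format(a, "04b"))  # 1001, i.e. 9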

However, a robot like this DOES have 'vision', and it needs to be able to INTERPRET that vision to function in a three-dimensional world. (The HUD should really be in 3D: human vision uses two eyes to see in three dimensions, and surely the machines would use their two eyes to mimic this, because depth perception is EXTREMELY useful when functioning in this kind of world. The HUD should therefore interpret the environment in actual three dimensions, not as a flat 2D representation.)
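
For what it's worth, getting depth out of two 'eyes' is standard computer vision. A minimal sketch using OpenCV's block-matching stereo on a hypothetical left/right image pair:

    import cv2

    # Two views of the same scene from slightly offset "eyes"
    # (hypothetical filenames; must be same-size grayscale images).
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching finds, for each patch in the left image, how far
    # it has shifted in the right image; bigger shift means closer object.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right)

    # Scale to 0..255 so the depth map can be saved as an image.
    depth_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
    cv2.imwrite("disparity.png", depth_vis.astype("uint8"))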

Here is where a HUD can be useful: if you look at what Tesla's cameras do, although much of that output IS aimed at human eyes, it still plots boxes on things and tracks the movement of those things, and so on.

reply

It's therefore possible that, when interpreting enormous amounts of constantly flowing visual data, mixing these kinds of 'tracking boxes' into the 'organic torrent of visuals' helps make the visual-interpretation algorithms more efficient: if you can put a box on some object and then lock onto it and track it visually, the CPU can interpret the mixture of organic visuals and computer-generated overlay more efficiently and easily than if it had to do it all without any visuals.

From _THIS_ perspective (no pun intended), a HUD that the CPU sees is FEASIBLE, even if not 100% necessary. From a programming point of view, this kind of visual might help quite a lot, and of course testing the tracking in the factory is VASTLY easier if there's some kind of visual that interprets things directly, instead of thousands of lines of code to wade through just to identify a screwdriver.
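
As a rough sketch of that "lock a box on an object and track it" idea, here is how it can be done with plain template matching in core OpenCV; the video file and the initial box coordinates are hypothetical:

    import cv2

    cap = cv2.VideoCapture("clip.mp4")  # hypothetical video
    ok, frame = cap.read()

    # Lock on: cut out the pixels inside the initial box and keep
    # them as the template to search for in every later frame.
    x, y, w, h = 300, 200, 80, 120  # hypothetical starting box
    template = frame[y:y + h, x:x + w].copy()

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Slide the locked-on patch over the new frame and take the
        # best-matching position as the object's new location.
        scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (x, y) = cv2.minMaxLoc(scores)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)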

You can think of all this as how 'A.I.' actually learns in our modern real world: it doesn't just crunch code, it 'looks at' things visually and then makes interpretations. When an A.I. plays a game, it does it visually, and it learns from its mistakes. The HUD makes sense from this point of view: an A.I. that is trying to be more like a human even LEARNS like a human, which means it learns VISUALLY and AURALLY, and that does call for something like a HUD.
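
A minimal sketch of that "looks at the game and acts" loop, using the gymnasium library with rendered pixels as the machine's only view. A real learner (DQN and its descendants) would update a policy from the rewards; the random policy here just stands in for one:

    import gymnasium as gym

    env = gym.make("CartPole-v1", render_mode="rgb_array")
    env.reset(seed=0)

    for step in range(100):
        pixels = env.render()  # H x W x 3 RGB frame: what the agent "sees"
        action = env.action_space.sample()  # stand-in for a learned policy
        _, reward, terminated, truncated, _ = env.step(action)
        if terminated or truncated:
            env.reset()

    env.close()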

Technically, a future-tech robot might not have needed a HUD, but for all the purposes I stated it COULD be useful. And of course no robot is an island: there are tech robots and factories that need DIAGNOSTICS when something goes wrong, or for upgrades, maintenance, intelligence gathering and such, and it's vastly more efficient to have a visual representation of everything important on-screen for that kind of purpose, especially since an A.I. CAN learn and look at things in a 'human way'.

reply

As a side note, I always love a good HUD, and this movie wasted an opportunity to show the useful, functional, logical HUD that a robot like that would most likely, or at least plausibly, have.

All it shows is 8-bit CPU machine code together with some useless graphs that have nothing to do with anything. The 'F you, a-hole' menu is one of the only relatively useful things, and even that shouldn't be so clunky.

Why are there no cursors or menus? Why doesn't it scan things properly with that HUD? It would be so cool to show it looking through walls with infrared, scanning people and objects to determine their importance. The other useful moment I can think of is the 'truck gear system' scene, but even there it could have scanned the ACTUAL truck it was in and figured out its mechanism by learning, instead of matching against a database.

Oh well, someday we will see a good HUD in some movie, where a robot actually scans everything informationally... That has been my dream for many decades now.

reply