Saturday, July 14, 2012

Heads-Up Displays and the Computer-Human Interface

Heads-up displays for a high-tech human interface need more sensory input to the brain. We have been studying the HUD (heads-up display) units for the Apache attack helicopter, the F-18, and business jets, and have thought about its uses for soccer moms in SUVs in the fog, over-the-road trucks, race cars, speedboats, and heavy equipment in crucial mining endeavors.

Considering these and other possible uses of the technology, I have come to the conclusion that fatigue and concentration can be the weak link. Realize that the human brain is not, through nature or nurture, experienced enough to maintain the loop and integrity of operations. Thus a HUD unit can cause an accident rather than prevent one. For instance, a truck driver who is fatigued loses concentration, and there have been many studies on this. The USN's Naval Research Division has done studies which indicate that even minor fatigue can cause a 10% decrease in concentration levels. When working with a HUD unit the brain is super-taxed, in that it is doing tasks it is not designed to do and does not have the second-nature experience to deal with.

Martial artists study until certain moves are nothing more than reflex: practice-makes-perfect scenarios. Batters in baseball develop hand-eye coordination to a superhuman level; any baseball team captain can vouch for this. Even Vince Lombardi, the famous football coach who studied human nature and willpower, would agree. It is not that humans cannot interface with computers; much of it is a blind-faith trust issue, and much is simply a matter of practice. Thus simulation is the key here.

If you look at the HUD theory and step back for a second, you can see the problems with visual sensory overload. The brain is said to devote some 40% of its capacity to assimilating visual input into data and action. That leaves 60% for everything else. So once you hit visual overload you blank out, or have a "brain fart," as people often say. Anyone who has been in sports and choked when up to bat, or simply misplaced a word, can attest that the brain glitches when doing unfamiliar tasks, or when recalling information which has not been committed to memory or used in a while. Thus, again: practice.

It is not that the brain is not capable of this; it is capable of almost anything, as we are learning (at least some of us are). The issues with the interface are what cause the problems. For instance, in one study, Apache attack helicopters in simulators, and a few in real life, crashed because the night vision and HUD display visually overloaded the pilot with data. Why? First off, at night everything is green or red, depending on the system, and 60% of the visual advantage of daylight is not present. Therefore you are working with 40% of the needed visual input, trusting a computer over innate characteristics like instinct, and asking your overloaded brain to interpret data and act accordingly. AH HA, what is accordingly? Certainly that requires experience. But what experience do you have, except perhaps checking out the girl on the freeway next to you, reading a newspaper, talking on your cell, shifting with one hand, and watching the brake lights ahead of you? Well, that is a good start, but a bit risky considering the cost of insurance. And of course there are always natural incursions, such as AFLAC!!! DUCK!?

Now then, it was determined that pilots flying F-18s took an additional 2.5 seconds to recognize runway incursions when using heads-up displays. Not good. Why? Fixation on the target. What target? Well, when coming in to land, the threshold, for instance, rather than down the runway where you are GOING. And remember the old adage: never fly to an airport that your mind has not gotten to ten minutes ahead of time. In other words, your mind is the thinking mechanism, and as inferior as it may be, it may not be inferior at all once you train it and use it properly. After all, low and slow can make you dead, Ted.

In Vietnam, fixation on the target was causing Huey Cobras, at 180 knots (200 in a dive), to take out the target by using themselves, running into the target accidentally; don't tell the Taliban about that one, or the Japanese for that matter. A similar problem: impaired drunk drivers often align their vehicles with parked cars, assume they are moving, simply decide to follow them, then smack into them and total their cars.

To test your visual input capabilities, one of the great places to go is anywhere businesses compete for your eyeballs. Try the Las Vegas Strip, for instance, each sign brighter than the next. Drive down the street at night and see how long you can continue to take in the visual input before everything starts seeming the same. You would have to go down that street several times before you could do it. Also, in the San Fernando Valley, on Reseda Blvd. or Ventura Blvd., there are very few sign ordinances; they finally made the guy who owned a car wash take down his sign, because he put a 1960 Corvette 40 feet in the air on a sign and painted it pink to outdo the competitors for your eyeball. In these areas, everywhere there could be a sign, there is one. Parts of Miami are akin to this. Another way to get visual sensory overload is to go to a trade show and simply walk a little faster than normal and try to take in everything. Or try it at a museum on an interesting subject: go through it as if there were a test later and take in as much as you can, as fast as you can. You will find the same problems of visual sensory input.

I would suggest that the best way to train pilots, drivers, and operators using HUDs is to practice first with relatively fast-paced visual input of things that are more natural: for example, a combination of video games which shock you when you make a mistake, so that you are involved, treat it as real, and feel the fear of penalty. Training the brain to accept, interpret, and act upon faster and faster incoming visual input is needed to save us money on multi-million-dollar pieces of equipment, and to save operators from what would be thought of as pilot error but in reality is inevitable without my next idea. I believe that multiple sensory input is a better use of brain power than visual input alone. Actually, a modified virtual-reality visual is best suited, in that your visual interpretation is modified: you are not seeing what you are seeing; instead you are seeing the objects differently than you normally would, so your eyes are not attempting to fill in the blanks, because there are no blanks to fill in. A person on the ground is a yellow stick. A tank is an orange box. Another aircraft is a blue object with a number on it for the type. Thus your brain is not trying to focus and see what it cannot, what takes up too much crucial time, or what consumes the capacity of your current visual input limitations.
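The symbolic rendering described above (a person as a yellow stick, a tank as an orange box, another aircraft as a blue object carrying a type number) amounts to a simple lookup from object class to abstract symbol. A minimal sketch in Python; the table entries come from the text, but the `Contact` structure and the gray-diamond fallback are assumptions for illustration:

```python
from dataclasses import dataclass

# Symbol table for the abstracted HUD view: each real-world object class is
# replaced by a plain shape and color, so the eye has no blanks to fill in.
SYMBOL_TABLE = {
    "person":   {"shape": "stick",  "color": "yellow"},
    "tank":     {"shape": "box",    "color": "orange"},
    "aircraft": {"shape": "object", "color": "blue"},  # labeled with a type number
}

@dataclass
class Contact:
    kind: str           # e.g. "person", "tank", "aircraft"
    type_code: int = 0  # aircraft type number, where applicable

def render_symbol(contact: Contact) -> str:
    """Return the abstract symbol the HUD would draw for a contact."""
    sym = SYMBOL_TABLE.get(contact.kind)
    if sym is None:
        return "unknown: gray diamond"  # assumed fallback for unclassified returns
    label = f" #{contact.type_code}" if contact.kind == "aircraft" else ""
    return f"{sym['color']} {sym['shape']}{label}"

print(render_symbol(Contact("tank")))                    # orange box
print(render_symbol(Contact("aircraft", type_code=18)))  # blue object #18
```

The point of the design is exactly what the paragraph argues: the renderer never passes raw imagery through, so the operator's visual system classifies a handful of flat symbols instead of straining to resolve detail it cannot see.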

I also believe that smells, touch, taste, and sound are all very good ideas for such HUD displays, as a way not to overload your senses and to keep your mind active and involved. For instance, an incoming SAM might send a shock into your finger, and you realize immediately that evasive maneuvers are needed ASAP. Also, if you look at commercial airliners, when you get too close to a mountain or the ground while the aircraft is clean, a voice alarm sounds: "Terrain, terrain, pull up, pull up." Similarly, the new business jets that Gulfstream puts out sound an alert when shoulder-launched missiles are coming, along with chaff, which is sent out the back. Generally by that time the missile has already gone by, but you take evasive maneuvers anyway in case of multiples. For instance, it is a known fact that Vietnamese anti-aircraft SAM sites shot three at a time per target. You could even have six coming if you were locked on by two sites. Only the very best pilots could keep track of six missiles; most who faced such a situation bought the farm and are names on walls now. Considering that, my father completing 250 combat missions in an A-4 in Vietnam and living to tell about it is a feat unto itself.

Sensory overload can appear to be an unbeatable game, although I disagree. For instance, when your aircraft is too low and approaching wires, you could get a whiff of a certain bad smell. A sensor on your mouthpiece could send in the taste of a fruit or something bitter. When you are on a perfect glide path, it could send positive tastes. Why? Because you cannot process as much data visually as you can in other ways, so visual input is not the best use of brain capacity or involvement in the game; it takes up too much bandwidth. Also, when things appear not to be real, they may be perceived as more of a game and therefore not taken seriously when they should be, which would be the drawback to the idea of pastelling the real world into a VR simulation of it. Then one might ask what is real and what is not. Why does it matter, as long as the mission is accomplished and the game is won? But aside from accomplishing the project or mission by the most expedient methods, let us further discuss the issues with brain-computer or VR interfaces and the HUD.
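The multi-channel cueing idea in the last two paragraphs (a finger shock for a SAM, a voice for terrain, a smell for wires, a taste for glide path) is essentially a routing table from alert type to sensory channel, keeping the visual channel free for flying. A hedged sketch; every alert name, channel label, and stimulus string here is an assumption made for illustration:

```python
# Hypothetical routing table: alert type -> (sensory channel, stimulus).
# Non-visual channels carry the warnings so the eyes stay on the flying.
CHANNEL_MAP = {
    "incoming_sam":    ("haptic", "shock to the finger"),
    "terrain_warning": ("audio",  "Terrain, terrain, pull up, pull up"),
    "wires_low":       ("smell",  "certain bad smell"),
    "glide_path_off":  ("taste",  "bitter"),
    "glide_path_good": ("taste",  "fruit / positive"),
}

def route_alert(alert: str) -> tuple[str, str]:
    """Return (channel, stimulus) for an alert; fall back to a visual HUD
    cue only when no non-visual mapping exists for that alert type."""
    return CHANNEL_MAP.get(alert, ("visual", "flashing HUD cue"))

print(route_alert("incoming_sam"))  # ('haptic', 'shock to the finger')
print(route_alert("fuel_low"))      # ('visual', 'flashing HUD cue')
```

Note the design choice this encodes: the visual channel is the fallback, not the default, which is the opposite of how current HUDs are built.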

Different parts of the brain light up as different tasks are performed; the cerebellum is not used to the degree it could be and is often wasted when it could be put to work. We know this from studies of people doing tasks of certain types while that area of the brain lights up: for instance, wiggling fingers or moving a joystick. So that capacity should be incorporated into the HUD, such as a shock in the fingers, a vibrating part on the yoke or control stick, or even a sensor connected to a different finger which simply tingles or vibrates; it could even be on the opposite hand, the toes, an ear, etc., anywhere the brain lights up in that underused region. It allows additional tasks or thoughts to be processed, and input can be delivered in other ways. But for the commercial side of things, it may be necessary to have these signals synchronized and standardized, so that retired military personnel who were trained that one sensation or image means a certain thing find that it means the same thing in personal or civilian life.

The video gaming industry, used mostly by younger generations, should also follow the same codes and standards, to ensure that a seamless VR environment uses the best possible methods and that skills transfer easily with only a few rules learned. There will be complexity in the tasks even with few rules, as is the nature of things anyway, so there is no sense in complicating future efforts with non-standardized methods now: KISS, in that regard. Eventually the helmets worn by Apache and fighter pilots and tank operators will be fully integrated, and the helmet will actually heat up the ambient temperature of the area of the brain which needs to be used, or cool it if it gets too hot, perhaps running impulses at as high as 90 Hz at peak performance times, when hopefully the body, adrenaline, and natural substances are working at maximum. I can tell you from my days of motorcycle street racing that, in fact, when the going gets tough, the tough have bodies that are going, yet in control and loving it. You cannot explain it, and you would not understand unless you have been there.

The helmet might also send ELF signals into the area of the brain to power up or stimulate that area. The brain works like a muscle: when you work it, it works well, and if you don't use it, in a combat situation it might take you to hell. Eventually, as wearable computers integrate with the Fully Connected Human Being (FCH) achiever, these ideas will seem rather obvious and natural. Since it is said that we humans use only 20% of our brains (speak for yourself), it only makes sense that we should use our technology to increase our abilities, to help us achieve more and innovate further, to get us where we should ultimately be with regard to evolving the species.

By watching and learning in VR, and using such simple rules to depict the needs of the operator, we can learn how to use intuition to determine patterns in possible futures based on actions taken, experience learned, and the use of the machine. The FORCE, if you will: the Jedi warrior encased in a stream of data in a non-obtrusive environment, performing beyond the ability of the human organism for the betterment of mankind. Once we learn these intuitions and understand the nature of things, patterns, cycles, etc., then we can create a perfect computer to do everything for us and to compile what is learned to further our needs. Care to comment on the current technology, areas of research, abstract thoughts, or the future of man and machine?

As man and machine merge somewhat in the coming period, and man, who was given this incredible brain, changes his environment, surroundings, planet, life span, body, and future, we will know God, for we will be one with him. One with everything, for we will most likely be god. Perhaps that's the plan; meanwhile, we need to get busy to prepare for the future and to boldly go where we are destined.

"Lance Winslow" - Online Think Tank forum board. If you have innovative thoughts and unique perspectives, come think with Lance; www.WorldThinkTank.net/. Lance is an online writer in retirement.
