
The Differences Between Human Vision and Machine Vision, and Why Business Needs Both

Well, the main difference between human vision and machine vision is simply a matter of scope. It also helps to understand the perils of artificial intelligence: in a nutshell, the Singularity refers to that point in human civilization when artificial intelligence reaches a tipping point beyond which it evolves into a super-intelligence that surpasses human cognitive powers, thereby potentially posing a threat to human existence as we know it.

And you know, humans and machines both use neural networks for object and face recognition. Now evidence is emerging that both types of vision are flawed in the same way.

There’s a class of deep convolutional neural networks that’s taken the world of artificial intelligence by storm. It’s been said that these machines now routinely outperform humans in tasks ranging from face and object recognition to playing the ancient game of Go.

The irony, of course, is that neural networks were inspired by the structure of the brain. It turns out that there are remarkable similarities between the broader structure of the deep convolutional neural networks behind machine vision and the structure of the brain responsible for vision. One of these evolved over millions of years, the other came about over the course of just a few decades. But both seem to work in the same way.

And that raises an interesting question—if machine vision and human vision work in similar ways, are they also restricted by the same limitations? Do humans and machines struggle with the same vision-related challenges?

As we all know, the second key difference is where the interpretation takes place. In machine vision, the sensors that capture light don’t do any of the interpretation. But weirdly, in human vision the first stages of interpretation take place on the retina itself: color and edge detection happen through ganglion cells in the retina.
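
What those ganglion cells do is, roughly speaking, a built-in form of edge detection. As a purely illustrative sketch (my own, in Python with NumPy and SciPy, not anything from the research described here), here is how a centre-surround style convolution picks out edges in a tiny image:

```python
import numpy as np
from scipy.signal import convolve2d

# A tiny grayscale "image": a bright square on a dark background.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# A centre-surround kernel, loosely analogous to a retinal ganglion cell's
# receptive field: excited by light in the centre, inhibited by light in
# the surround. Uniform regions cancel out; edges produce strong responses.
kernel = np.array([
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
]) / 8.0

edges = convolve2d(image, kernel, mode="same")

# Large values mark the border of the square; the flat interior is ~0.
print(np.round(edges, 2))
```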

Machine vision improves quality and productivity while driving down manufacturing costs. It brings additional safety and operational benefits by reducing human involvement in a manufacturing process. It also prevents human contamination of clean rooms and protects human workers from hazardous environments.

But we now have an answer thanks to the work of Saeed Reza Kheradpisheh at the University of Tehran in Iran and a few pals from around the world. They have tested humans and machines with the same vision challenges and discovered that they do indeed struggle with the same kinds of problems.

First, some background. The pathway in the brain responsible for vision operates in several layers, each of which is thought to extract progressively more information from an image, such as movement, shape, color, and so on.

Each layer consists of huge numbers of neurons connected into a vast network. Deep convolutional neural networks have a similar structure: they too are made up of layers, and each of these is a network of circuits designed to mimic the behavior of neurons, hence the term neural network.
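
To make that layered structure concrete, here is a minimal sketch of a small convolutional network in Python using PyTorch (the library and the layer sizes are my choice for illustration, not anything specified by the researchers):

```python
import torch
from torch import nn

# A toy deep convolutional network: a stack of layers, each extracting
# progressively more abstract information from the image, loosely echoing
# the layered visual pathway described above.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early layer: edges, textures
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper layer: shapes, parts
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # final layer: object categories
)

# A fake batch containing one 32x32 RGB image, just to show the shapes line up.
x = torch.randn(1, 3, 32, 32)
print(model(x).shape)  # torch.Size([1, 10])
```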

As you know, deep convolutional neural networks have the potential to help probe the way human cognition works. The design of certain images is a critical task in applications such as air traffic control, emergency exits, instructions for the use of lifesaving equipment, and so on.

We know that Industrial Vision Systems is a global specialist in high-performance machine vision and vision systems. And whichever way you look at it, they’ve definitely got you covered.

You know, for decades, machine vision systems have been used to teach computers to perform inspections that detect defects, contaminants, functional flaws, and other irregularities in manufactured products. Human visual inspection still prevails, however, in situations that require learning by example and appreciating acceptable deviations from a control. Machine vision, by contrast, offers the speed and robustness that only a computerized system can.
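
To give a flavour of the computerized side, here is a deliberately simple inspection sketch in Python (the reference image, the "defect", and the threshold are all invented for the example; real systems use far more sophisticated defect metrics):

```python
import numpy as np

def inspect(product_image: np.ndarray, reference: np.ndarray,
            threshold: float = 0.05) -> bool:
    """Return True if the product passes inspection.

    Both images are grayscale arrays scaled to [0, 1]. The part fails when
    its average absolute deviation from the known-good reference exceeds
    the threshold (a crude stand-in for a real defect metric).
    """
    deviation = np.abs(product_image - reference).mean()
    return deviation <= threshold

# Toy example: a known-good reference and a copy with a bright "defect".
reference = np.zeros((64, 64))
defective = reference.copy()
defective[10:30, 10:30] = 1.0

print(inspect(reference.copy(), reference))  # True  - matches the reference
print(inspect(defective, reference))         # False - deviates too much
```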

Using humans to evaluate and test these safety-critical images is a time-consuming and expensive business. But perhaps these kinds of neural networks could do the work instead, or at least screen out the worst examples, leaving humans with a much better defined and less onerous task.
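
One way such screening might look in code is sketched below; `classify` is a hypothetical stand-in for any trained image classifier that returns a probability distribution over interpretations, so none of this comes from the study itself:

```python
from typing import Callable, List, Tuple
import numpy as np

def screen_designs(images: List[np.ndarray],
                   classify: Callable[[np.ndarray], np.ndarray],
                   min_confidence: float = 0.9) -> Tuple[list, list]:
    """Split candidate designs into (auto_passed, needs_human_review).

    A clear, unambiguous design should make the classifier put almost all
    of its probability on a single interpretation; anything less confident
    gets routed to a human reviewer instead.
    """
    auto_passed, needs_review = [], []
    for img in images:
        probs = classify(img)
        if probs.max() >= min_confidence:
            auto_passed.append(img)    # machine is confident: screen it through
        else:
            needs_review.append(img)   # ambiguous: leave it to a human
    return auto_passed, needs_review
```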

Besides that, it may be possible to design machine-vision systems that aren’t fooled in the same way humans are, and so could augment human decision-making in critical situations such as driving.

