
What AI Can Do for Tech Development — and What It Can’t

February 22, 2020 by Sam Holland

While AI, or artificial intelligence, is already revolutionising all manner of industries, it is nevertheless nothing more than a system. We look at what AI is at its crux, consider its industrial implications, and ultimately cover its limitations.

Defining Technological Development in the Context of AI

Of course, even in an industrial context alone, ‘technological development’—and, in fact, ‘artificial intelligence’ itself—are both very broad terms.

Let’s home in, therefore, on both an obvious industrial example of technological development and a particular engineering application of artificial intelligence: let’s use manufacturing and machine vision as the collective topic of interest.

Specifically, our example will focus on a product assembly line that uses a smart camera, whose embedded AI software is programmed to spot missing components on a product passing along a factory conveyor belt. (More information on image sensing in industry can be read on All About Circuits.)

This example gives us the scope to use modern manufacturing as a means to consider how far AI can go in achieving fault detection through machine vision on its own, as well as, ultimately, how much human intervention is still needed for such factory operations to function.

 

A diagram of a machine vision system.

An annotated diagram that shows a factory conveyor belt and its surrounding equipment, particularly machine vision technology (including a camera and an image processing library). Image Credit: Wikimedia Commons.

 

What Artificial Intelligence Can Do at a Basic Level

To explain the potential of AI, let’s start by defining the basic dynamics of artificial intelligence processing, which may be summarised by the input-process-output (IPO) model.

 

Introducing the IPO Model

From beginning to end, the three stages of the input-process-output methodology can be reduced to the following (a minimal code sketch follows the list):

1. Data is received (in the industrial context we’re using here, by a sensor).

2. Machine learning algorithms are then used to recognise patterns in the input and make an inference accordingly.

3. The AI, hinging on the accuracy of its decision making, produces and/or acts on the result of the now-completed inference; the obvious, but far from sole, example is that it displays its conclusion on a user interface.
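As a rough sketch, the three stages map naturally onto three functions. The Python below is an illustration only: the function names (read_sensor, classify, act_on_result) and the stand-in logic are invented for this article, not part of any real vision library.

```python
# A minimal sketch of the input-process-output (IPO) model.
# All names and logic here are illustrative placeholders.

def read_sensor() -> bytes:
    """Input: receive raw data, e.g. one greyscale frame from a camera."""
    return b"\x00" * (640 * 480)  # stand-in for a captured frame

def classify(frame: bytes) -> bool:
    """Process: a trained model would recognise patterns and infer here.
    This stand-in simply checks whether any pixel is non-zero."""
    return any(frame)

def act_on_result(passed: bool) -> None:
    """Output: act on the completed inference, e.g. update a user
    interface or divert a failed product off the conveyor."""
    print("PASS" if passed else "FAIL")

act_on_result(classify(read_sensor()))
```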

 

Applying the IPO Model to Factory Image Inspection

With the above IPO model established as the template of general AI information processing, let’s now apply it to the topic of machine vision in manufacturing (using a bottle as the example product):

Input

A bottle, which should have been filled at an earlier stage in the manufacturing process, enters a camera’s field of vision and is photographed. Within the camera is a digital sensor that passes the received information to the corresponding computer hardware and software (collectively referred to below as the ‘machine vision system’) to reach the decision-making stage.

Process

The machine vision system cross-references the camera’s rendering of the bottle with the system’s image processing library, which stores photographs of what the observed bottle looks like when full, and it makes an inference depending on whether or not the bottle is deemed a match. (This is a basic example of AI-driven decision making.)

Output

Depending on the outcome of the AI-driven decision making, the bottle is (in what is known as the pass-fail response) either approved or flagged as a product that has failed inspection.

Again, this is a basic example—but the IPO model that it adheres to is currently the foundational explanation of AI processing and decision making. And as a foundation, a lot of effort is going into building on it, particularly in terms of its processing stage.
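To make this basic pass-fail example concrete, here is one way the decision could be sketched in Python with OpenCV’s template matching standing in for the AI inference stage. The file names and the 0.8 similarity threshold are assumptions made for illustration, not values from a real inspection system.

```python
# Minimal pass-fail inspection sketch using OpenCV template matching.
# Paths and the 0.8 similarity threshold are illustrative assumptions.
import cv2

def inspect_bottle(frame_path: str, reference_path: str,
                   threshold: float = 0.8) -> bool:
    """Return True (pass) if the photographed bottle matches the stored
    reference image of a correctly filled bottle closely enough."""
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)          # input
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    if frame is None or reference is None:
        raise FileNotFoundError("could not read one of the images")
    scores = cv2.matchTemplate(frame, reference,                  # process
                               cv2.TM_CCOEFF_NORMED)
    _, best_score, _, _ = cv2.minMaxLoc(scores)
    return best_score >= threshold                                # output

passed = inspect_bottle("conveyor_frame.png", "full_bottle_reference.png")
print("PASS" if passed else "FAIL: flag the bottle for review")
```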

What, then, is an advanced example of such factory-based machine vision technology?

 

reCAPTCHA example.

The CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) test is a clear demonstration that a human’s visual perception dwarfs that of a machine. Pictured: an example of Google’s image-based CAPTCHA test. Image Credit: Google.

 

What Artificial Intelligence Can Do at an Advanced Level

A principal consideration, when it comes to optimising artificial intelligence, is that AI applications such as machine vision still hinge on human intervention.

This brings us to the human-to-AI task of data labelling and the computer-based technique of generative adversarial networks, or GANs. Both are principal elements of AI that show just why the aforementioned machine vision system, whose main purpose is to address the yes-or-no question ‘Should this product pass or fail factory inspection?’, is a comparatively basic one.

An AI system that is subject to data labelling is one that is programmed to discriminate between multiple visual stimuli thanks to many people inputting the relevant nouns that apply to a long line of images (e.g., in the context of programming smart vehicles, ‘stop sign’, ‘traffic lights’, etc.). This is all in the interest of training machines to ‘see’ what they’re presented with.
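To picture the output of this work, the snippet below shows, in miniature, the kind of image-to-label mapping that an army of annotators produces. The filenames and label set here are invented for illustration.

```python
# A miniature illustration of human data labelling: each image is paired
# with the noun an annotator assigned to it. Filenames and labels are
# invented for this example; real datasets run to millions of entries.
labels = {
    "frame_0001.jpg": "stop sign",
    "frame_0002.jpg": "traffic lights",
    "frame_0003.jpg": "pedestrian crossing",
}

for filename, label in labels.items():
    print(f"{filename} -> {label}")  # training code would load each pair
```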

In view of the ever-rising number of machine vision applications (the industry’s interest in self-driving cars especially), data labelling is now so in demand employment-wise that it could be called an industry in itself. However, it does raise the question of how practical such a labour-intensive task is.

Generative adversarial networks (GANs) are a work-in-progress answer to data labelling’s demand for such human labour. A GAN comprises two major parts: one network (the generator) that produces fake images, and another (the discriminator) that is shown both those fakes and genuine photographs and is programmed to distinguish the real pictures from the fakes, learning from its mistakes when it fails to.
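For a feel of the generator-versus-discriminator loop, here is a deliberately tiny GAN sketch in PyTorch. To stay self-contained it learns to mimic a one-dimensional Gaussian rather than photographs; the network sizes, learning rates, and step count are arbitrary illustrative choices, not a recipe for image generation.

```python
# Tiny GAN sketch: the generator learns to mimic samples drawn from a
# 1-D Gaussian, standing in for 'real photographs'. All hyperparameters
# are arbitrary choices made for this illustration.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 2.0 + 5.0   # 'genuine' samples: N(5, 2)
    fake = generator(torch.randn(64, 8))    # the generator's fakes

    # Discriminator turn: learn to label real samples 1 and fakes 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator turn: learn from its mistakes by trying to make the
    # discriminator label its fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

with torch.no_grad():
    sample_mean = generator(torch.randn(1000, 8)).mean().item()
print(f"mean of generated samples: {sample_mean:.2f} (target is about 5.0)")
```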

The results of such a self-teaching system have been so promising that one example of GAN-based technology (by software company DeepMind) has achieved mammography capabilities that may rival those of a radiologist.

Given that such AI-based image processing can achieve a medical breakthrough of this proportion, consider what it can do for industrial machine vision.

 

As it stands now, artificial intelligence cannot compare to the efficiency of human cognition, intuition, and common sense. Pictured: artificial intelligence graphics, including those that represent machine vision and data analytics. A businessman stands in the background. Image Credit: Bigstock.

 

What Artificial Intelligence Cannot Do

We’ve established that artificial intelligence is a work in progress, but one so promising that its machine vision applications can assist with anything from factory product inspection to medical diagnosis.

How AI fares against a human observer, however, is a different matter entirely. For the time being, it cannot compare to the efficiency of human cognition, intuition, and (especially) common sense.

The reason for this is chiefly that an individual’s ability to comprehend and discriminate between varied visual information is effortless. Consider, for instance, that a human engineer could manage the aforementioned pass-fail bottle inspection without a second thought (albeit far less dependably than a computer could).

Meanwhile, to achieve an AI system that could even approach image comprehension as diverse as an individual’s perception, a great deal of human intervention would, once again, be crucial. Engineers and other scientists would need to enhance their software with any combination of the below:

  • An indefinite number of people carrying out data labelling,

  • A GAN (generative adversarial network), and, of course,

  • Plenty of general reference images for the given AI to learn from.

All of these points sum up an obstacle that is yet to be overcome in AI: the optimisation of artificial intelligence requires vast amounts of data. After all, it is a misunderstanding that ever more data is necessarily the key to automation; in fact, managing data is labour-intensive human work. As the Harvard Business Review puts it: “The future of AI will be about less data, not more”.
