Poor eyesight remains one of the main obstacles to letting robots loose among humans. But it is improving, in part by aping natural vision
ROBOTS are getting smarter and more agile all the time. They disarm bombs, fly combat missions, put together complicated machines, even play football. Why, then, one might ask, are they nowhere to be seen, beyond war zones, factories and technology fairs? One reason is that they themselves cannot see very well. And people are understandably wary of purblind contraptions bumping into them willy-nilly in the street or at home.
All that a camera-equipped computer sees is lots of picture elements, or pixels. A pixel is merely a number reflecting how much light has hit a particular part of a sensor. The challenge has been to devise algorithms that can interpret such numbers as scenes composed of different objects in space. This comes naturally to people and, barring certain optical illusions, takes next to no time and precious little conscious effort. Yet emulating the feat in computers has proved tough.
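To make the point concrete, here is a minimal sketch in Python of what such a computer actually receives: nothing but a grid of numbers (the values below are made up for illustration).

```python
import numpy as np

# What the machine actually receives: a grid of numbers. This hypothetical
# 4x4 greyscale patch stores, for each pixel, how much light hit the sensor.
pixels = np.array([
    [12, 15, 200, 210],
    [10, 14, 205, 215],
    [11, 13, 198, 220],
    [ 9, 16, 202, 212],
], dtype=np.uint8)

# A human instantly sees a vertical edge running down the middle; the
# algorithm starts with nothing but these sixteen numbers.
print(pixels)
```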
In natural vision, after an image is formed in the retina it is sent to an area at the back of the brain, called the visual cortex, for processing. The first nerve cells it passes through react only to simple stimuli, such as edges slanting at particular angles. They fire up other cells, further into the visual cortex, which react to simple combinations of edges, such as corners. Cells in each subsequent area discern ever more complex features, with those at the top of the hierarchy responding to general categories like animals and faces, and to entire scenes comprising assorted objects. All this takes less than a tenth of a second.
The outline of this process has been known for years, and in the late 1980s Yann LeCun, now at New York University, pioneered an approach to computer vision that tries to mimic the hierarchical way the visual cortex is wired. He has been tweaking his convolutional neural networks, or ConvNets, ever since.
Seeing is believing
A ConvNet begins by swiping a number of software filters, each several pixels across, over the image, pixel by pixel. Like the brain's primary visual cortex, these filters look for simple features such as edges. The upshot is a set of feature maps, one for each filter, showing which patches of the original image contain the sought-after element. A series of transformations is then performed on each map in order to enhance it and improve the contrast. Next, the maps are swiped again, but this time rather than stopping at each pixel, the filter takes a snapshot every few pixels. That produces a new set of maps of lower resolution, which highlight the salient features while keeping the demands on computing power in check. The whole process is then repeated, with several hundred filters probing for more elaborate shapes rather than just a few scouring for simple ones. The resulting array of feature maps is run through one final set of filters, which classify objects into general categories, such as pedestrians or cars.
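For the technically minded, the pipeline just described maps onto a few lines of modern code. Below is a minimal sketch in Python using the PyTorch library; the layer sizes and filter counts are illustrative assumptions, not Dr LeCun's actual design.

```python
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            # First sweep: a handful of small filters looking for edges.
            nn.Conv2d(1, 8, kernel_size=5),   # 8 feature maps
            nn.Tanh(),                        # contrast-enhancing transformation
            nn.MaxPool2d(2),                  # snapshot every few pixels: lower resolution
            # Second sweep: more filters probing for more elaborate shapes.
            nn.Conv2d(8, 32, kernel_size=5),
            nn.Tanh(),
            nn.MaxPool2d(2),
        )
        # One final set of filters, classifying into general categories.
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        maps = self.features(x)               # the stacked feature maps
        return self.classifier(maps.flatten(1))

net = TinyConvNet()
image = torch.randn(1, 1, 28, 28)             # one 28x28 greyscale image
scores = net(image)                           # one score per category
```

Each `Conv2d` layer plays the part of a sweep of filters, `Tanh` the contrast-enhancing transformation, and `MaxPool2d` the lower-resolution snapshot taken every few pixels.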
Many state-of-the-art computer-vision systems work along similar lines. The uniqueness of ConvNets lies in where they get their filters. Traditionally, these were plugged in one by one, in a laborious manual process in which an expert human eye told the machine what features to look for, at each level, in the images it would later encounter. That made systems which relied on such filters good at spotting narrow classes of objects but inept at discerning anything else.
Dr LeCun's artificial visual cortex, by contrast, lights on the appropriate filters automatically as it is taught to distinguish the different types of object. When an image is fed into the unprimed system and processed, the chances are it will not, at first, be assigned to the right category. But, shown the correct answer, the system can work its way back, modifying its own parameters so that the next time it sees a similar image it will respond appropriately. After enough trial runs, typically 10,000 or more, it makes a decent fist of recognising that class of objects in unlabelled images.
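That corrective loop is, in essence, what practitioners call training by backpropagation. A hypothetical sketch, reusing the TinyConvNet class from the earlier example (the stand-in dataset of random images exists purely so the code runs end to end):

```python
import torch
import torch.nn as nn

net = TinyConvNet()  # the sketch network defined above
optimiser = torch.optim.SGD(net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# A stand-in "dataset": random images with made-up labels. A real system
# would loop over thousands of labelled photographs instead.
loader = [(torch.randn(4, 1, 28, 28), torch.randint(0, 2, (4,)))
          for _ in range(5)]

for image, label in loader:
    scores = net(image)            # the system's current guess
    loss = loss_fn(scores, label)  # how far off the correct answer it was
    optimiser.zero_grad()
    loss.backward()                # work back through the layers...
    optimiser.step()               # ...modifying the parameters
```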
This still requires human input, though. The next stage is unsupervised learning, in which instruction is entirely absent. Instead, the system is shown lots of pictures without being told what they depict. It knows it is on to a promising filter when the output image resembles the input. In a computing sense, resemblance is gauged by the extent to which the input image can be recreated from the lower-resolution output. When it can, the filters the system had used to get there are retained.
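One common way to realise this idea in code is an autoencoder, sketched below as a stand-in for the procedure described above (the architecture is an assumption for illustration): an encoder produces the lower-resolution output, a decoder attempts to recreate the input from it, and the reconstruction error gauges resemblance.

```python
import torch
import torch.nn as nn

# Encoder: filters producing lower-resolution feature maps (28x28 -> 14x14).
encoder = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5, stride=2, padding=2),
    nn.Tanh(),
)
# Decoder: attempts to recreate the input from that output (14x14 -> 28x28).
decoder = nn.ConvTranspose2d(8, 1, kernel_size=5, stride=2,
                             padding=2, output_padding=1)

params = list(encoder.parameters()) + list(decoder.parameters())
optimiser = torch.optim.Adam(params, lr=0.001)

# Unlabelled stand-in pictures: the system is never told what they depict.
unlabelled = [torch.randn(4, 1, 28, 28) for _ in range(5)]

for image in unlabelled:
    code = encoder(image)                     # lower-resolution output
    recreated = decoder(code)                 # attempt to rebuild the input
    loss = ((recreated - image) ** 2).mean()  # resemblance to the input
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()                          # keep filters that reconstruct well
```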
In a tribute to nature's nous, the lowest-level filters arrived at in this unaided process are edge-seeking ones, just as in the brain. The top-level filters are sensitive to all manner of complex shapes. Caltech-101, a database routinely used for vision research, consists of some 10,000 standardised images of 101 types of just such complex shapes, including faces, cars and watches. When a ConvNet with unsupervised pre-training is shown the images from this database it can learn to recognise the categories more than 70% of the time. That is just below what the top-scoring hand-engineered systems are capable of, and those tend to be much slower.
This approach need not be confined to computer vision. In theory, it ought to work for any hierarchical system: language processing, for example. In that case individual sounds would be the low-level features akin to edges, whereas the meanings of conversations would correspond to elaborate scenes.
For now, though, ConvNets have proved their mettle in the visual domain. Google has been using them to blot out faces and licence plates in its Street View mapping service. The technology has also come to the attention of DARPA, the research arm of America's Defence Department. The agency provided Dr LeCun and his team with a small roving robot which, equipped with their system, learned to detect large obstacles from afar and correct its path accordingly, a problem that lesser machines often, as it were, trip over. The scooter-sized robot was also rather good at not running into the researchers. In a selfless act of scientific bravery, they strode confidently in front of it as it rode towards them at a brisk walking pace, only to see it stop in its tracks and reverse. Such machines may not quite yet be ready to walk the streets alongside people, but the day they can is surely not far off.