More about Robotics and AI
First Time Mass Production Is Automated
By the 18th century in the United Kingdom and France, weaving was a major but labor-intensive industry; weavers required assistants to raise and lower threads to produce patterns. Inventors tried to automate the process, and in 1804 a French inventor named Joseph-Marie Jacquard unveiled what would become the widely adopted “Jacquard Loom.” It worked by translating patterns from punch cards into commands that determined the lifting and lowering of threads, increasing the speed at which complex patterns could be woven more than twentyfold, from one inch to two feet a day. The loom became the first widely used machine that could follow a program; in that sense it was a forerunner of computer programming. Video: Henry Ford Museum.
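The loom's punch cards were, in effect, a stored program: each row of holes told the machine which threads to raise before the shuttle passed. A minimal sketch of the idea in Python (the encoding here is illustrative, not Jacquard's actual card format):

```python
# Each "card" row is a pattern of holes; a hole (1) means "lift this thread."

def weave_row(card_row):
    """Return the indices of the warp threads lifted for one shuttle pass."""
    return [i for i, hole in enumerate(card_row) if hole == 1]

# A tiny two-row "program": 1 = hole punched (thread lifted), 0 = no hole.
pattern = [
    [1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1],
]

for row in pattern:
    print("lift threads:", weave_row(row))
```

Chaining many such cards produced an arbitrarily long sequence of instructions, which is what made the loom programmable rather than merely mechanical.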
The Term Robot Is First Used
In his 1920 play “R.U.R.: Rossum’s Universal Robots,” the Czech writer Karel Čapek tells the tale of a factory in which thousands of synthetic humanoids have been created. They work so cheaply and tirelessly that they cut the production costs of woven material by 80 percent. Čapek named the devices “robots,” after the Czech word robota, referring to the forced labor of serfs. The play not only gave robots their modern name, but heightened the existential fear that robots would someday replace people, as Čapek’s robots ultimately rise up and kill humanity. Video: Public Domain.
First Autonomous Robots Navigate a Room
In 1949, an American-born British neurophysiologist and inventor named William Grey Walter introduced a pair of battery-powered, tortoise-shaped robots that could maneuver around objects in a room, guide themselves toward a source of light and find their way back to a charging station using the same components that remain crucial to robotics today: sensor technology, a responsive feedback loop, and logical reasoning. Photo: Dr. F. W. and J. Merlyn Clutterbuck/National Museum of American History.
First Robotic Arm is Installed on a Factory Floor
Known as “Unimate,” the first industrial robotic arm went to work in a General Motors plant, lifting and stacking hot, die-cut metal parts. Created by George Devol and his partner Joseph Engelberger, it could move along the X and Y axes, possessed a rotatable, pincer-like gripper, and could follow a program of up to 200 movements stored in its memory. Deployable for numerous tasks, particularly those too taxing or dangerous for humans, such as lifting 75-pound loads without tiring and working amid toxic fumes, the Unimate began the transformation of the auto industry into an arena of widespread automation. Photo: Kawasaki.
First Small, Electric-Powered Six-Axis Robot
Unimate’s robots were large and powered by hydraulics, causing them to leak and thus limiting where they could be used. In 1969 Victor Scheinman designed a small robot arm with joints powered by electric motors embedded in the arm itself. The “Stanford Arm,” as it was dubbed when Scheinman built prototypes, could move much more quickly than previous robots, and without the mess of hydraulics. This opened the door for roboticists to consider using robots in cleaner, indoor environments, or even on desks (the original prototype weighed only 15 pounds). It also had six axes of movement, or “six degrees of freedom,” allowing it to more closely approximate the range of a human arm. And it was the first robotic arm to be controlled not just by step-by-step instructions stored in memory, as with the Unimate, but by software running on a computer. This meant that the Stanford Arm could perform calculations in real time and, in later iterations, react to its environment (such as with touch sensors or a vision system). The era of faster-moving industrial robots with fine-grained computer control was born. Photo: Les Earnest/Stanford Robotics Group
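Computer control means that joint angles can be converted into an end-effector position with real-time arithmetic. A toy sketch of that calculation for a simplified two-joint planar arm (the real Stanford Arm had six joints, including a sliding one; this is only the core trigonometry):

```python
import math

def forward_kinematics(l1, l2, theta1, theta2):
    """End-effector (x, y) position for a two-link planar arm.
    l1, l2: link lengths; theta1, theta2: joint angles in radians."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# With both joints at zero, the arm is fully extended along the x-axis,
# so it reaches a distance of l1 + l2.
print(forward_kinematics(1.0, 1.0, 0.0, 0.0))  # → (2.0, 0.0)
```

Running this kind of computation continuously, rather than replaying a fixed list of stored moves, is what made fine-grained, reactive control possible.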
First Robot to Use Artificial Intelligence
It was known as Shakey because of the stuttering way it moved around, but what was most distinctive about this robot, created by a group of engineers at the Stanford Research Institute, was that it included pioneering artificial intelligence. If you gave Shakey a goal — such as navigating its way across a room or pushing a box along the floor — it could accomplish it by observing the world around it, creating a plan, and executing it. With sensors that included a TV camera, a range finder and touch-sensitive metal whiskers, Shakey would gather data that enabled it to build a model of its environment and then use a “planning” program to generate its next moves. This idea of a separate “planning” layer was such a crucial innovation that it is still central to many robotic systems today. Photo: The Stanford Library.
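That observe-plan-execute cycle can be sketched in a few lines. This is a minimal stand-in for illustration only, not SRI's actual software; the one-dimensional world and the `sense`, `plan`, and `act` functions are invented for the example:

```python
# A minimal sense-plan-act loop in the spirit of Shakey's architecture.

def sense(world):
    """Gather observations (Shakey used a camera, range finder, whiskers)."""
    return {"position": world["robot"], "goal": world["goal"]}

def plan(model):
    """Produce a step-by-step plan from the current world model."""
    steps = []
    pos = model["position"]
    while pos < model["goal"]:
        steps.append("move_forward")
        pos += 1
    return steps

def act(world, steps):
    """Execute the plan, one primitive action at a time."""
    for step in steps:
        if step == "move_forward":
            world["robot"] += 1

world = {"robot": 0, "goal": 3}
act(world, plan(sense(world)))
print(world["robot"])  # → 3
```

The key point is the separation of concerns: sensing builds a model, planning reasons over the model, and acting executes the result — the layered design the entry describes.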
First “Pick and Place” Robot
While six-axis Unimate-style arms can lift heavy payloads and manipulate them with precision, not all industrial labor requires strength. In 1978, the Japanese automation researcher Hiroshi Makino designed the four-axis SCARA, or “Selective Compliance Assembly Robot Arm,” engineered simply to pick something up, swivel around, and plop it down somewhere else with precision — all in one smooth motion. It was the first example of what has come to be known as a “pick and place” robot. SCARA arms are generally less flexible and not as strong as six-axis arms, but they are much faster, able to rapidly insert small electronic components into place. The arms sped up the manufacturing of everything from computer chips to watches and are still commonly used in global manufacturing today. Photo: Yamanashi University in Japan.
First “Sociable” Robot Designed to Provoke and React to Emotions
Cynthia Breazeal believes that if we are truly going to work alongside robots, to trust them and invite them into our homes, robots will need to be able to read people’s emotions and appear to have a personality. With this in mind, she set to work creating Kismet, a robotic head designed to provoke — and react to — emotions. Twenty-one motors controlled an expressive pair of yellow eyebrows, red lips, pink ears and big blue eyes, allowing Kismet to express a range of emotions, from happy to bored. Audio sensors and algorithms picked up vocal tone, so the robot would look downcast if you yelled at it, or curious if you spoke gently. With Kismet, Breazeal proved the stickiness and appeal of a robot that has charm — laying the groundwork for the many voice assistants like Alexa, Siri and Google Home that are now colonizing the world’s homes. Photo: Sam Ogden/Science Source.
Roomba Invades the World’s Living Rooms
A crucial innovation of iRobot, founded by a group of MIT researchers in 1990, emerged from research its founders did for the U.S. military while working on a robot to check areas for land mines. The group came up with an algorithm that enabled the robot to explore every square foot of a given space. Handily, the same concept works as a control mechanism for a robotic vacuum; rooms need to be swept in their entirety. The Roomba was the first functional robot to become a bona fide hit with the public, selling 15 million units to date and proving that if a robot is useful enough, it doesn’t matter if it’s not a full “android” — people will welcome it into their lives. Photo: Aventine.
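The core of such a coverage algorithm is guaranteeing that every patch of floor gets visited. iRobot's actual navigation code is proprietary; this is just a minimal sketch of one classic approach, a serpentine (boustrophedon) sweep over a grid of cells:

```python
# Sketch of systematic area coverage: sweep row by row, reversing
# direction on alternate rows so the path never retraces a cell.

def coverage_path(rows, cols):
    """Serpentine path that visits every cell of a rows x cols grid once."""
    path = []
    for r in range(rows):
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cells:
            path.append((r, c))
    return path

path = coverage_path(3, 4)
# Every one of the 12 cells appears exactly once.
assert len(path) == 12 and len(set(path)) == 12
```

A real vacuum must also handle walls, furniture and odd room shapes, which is where the harder parts of the algorithm live, but the goal is the same: complete coverage of the space.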
Kiva Robots Re-Engineer the Warehouse
Early this century, Mick Mountz had an idea: Rather than have shipping-center employees find and fetch items in vast warehouses, why not have robots do that work? He and his cofounders created the Kiva robot: a squarish, close-to-the-ground orange bot (not too different from an extra-large Roomba) that can glide around warehouses, moving racks of goods. Kiva used some inexpensive off-the-shelf components, which made the robots less precise in how they moved about, but Kiva’s engineers compensated with software that course-corrected on the fly. The result was an autonomous machine that was far more flexible at automating a warehouse than a traditional conveyor-belt system, and relatively easy to use. Kiva’s system revolutionized the efficiency of warehousing and shipping. Amazon bought the company for $775 million in 2012. Photo: Getty Images.
“BigDog” Tramps Through the Mud
Boston Dynamics’ “BigDog” is a YouTube favorite thanks to its eerily lifelike performances. In videos shot over the course of several years, the four-legged robot can be seen tramping through rough terrain: leafy forests, 60-degree hills, knee-deep snow and piles of bricks. It isn’t fully autonomous; a human controller pilots it, so it doesn’t need a sophisticated planning and vision system. But it has 50 sensors and an onboard computer that manages the gait and keeps it stable. Most notably, BigDog is able to bounce along with only two feet touching the ground at a time, making it far more nimble than rolling robots, which are generally limited to areas that have flat surfaces, like warehouses and pavement. BigDog’s mobility points to a time when everyday mass-produced robots could readily navigate front lawns, curbs or stairs, opening up new possibilities for everything from package delivery to in-home personal care. Photo: Boston Dynamics.
Self-Driving Cars Pass Their First Big Test
The modern age of self-driving cars was launched on October 8, 2005, when a Volkswagen Touareg named “Stanley” won the second DARPA Grand Challenge by completing a rough and often harrowing 131.2-mile course in the Mojave Desert within 10 hours. The race had been established the previous year by the Defense Advanced Research Projects Agency (DARPA) to spur competition and innovation in military autonomous vehicle tech, but none of the cars in the earlier competition had been able to go more than eight miles. What fueled Stanley’s victory was a constellation of improvements, including AI trained on the driving habits of real-world humans and five “Lidar” laser sensors, a technology that enabled the car to identify objects within a 25-meter range in front of the vehicle. Lidar, which stands for “light detection and ranging,” has since become a key component of robotic vision systems in cars and even some Kiva-style warehouse robots; in fact, the leading Lidar firm, Velodyne, was spun off from one of Stanley’s competitors in the race. Photo: Stanford Racing Team.
Deep Learning Takes Off
In 2012, the British-born artificial-intelligence expert Geoffrey Hinton and a small team at the University of Toronto produced a stunning advance in AI by creating the most accurate visual-recognition system the world had yet seen. It was, and is, based on deep learning, an AI technique that enables a computer to recognize images through exposure to massive amounts of photographic data. The concept of training a neural network had existed for decades, but it had languished. Hinton had long suspected that what was needed was far more processing power, and many more images to train on, and by 2012 he was finally getting his wish, thanks to the huge number of digital images suddenly available via smartphones and the internet. In the 2012 ImageNet competition, Hinton’s team created a system that could identify and sort more than a million images with an error rate of only 15.3 percent, 10 points better than the closest rival. Within months, AI companies were flocking to “deep learning,” and firms like Google were releasing open-source tools that let any tiny start-up easily train neural nets. Thanks to Hinton and his team, today even the smallest start-ups can create robots that recognize everyday objects. Photo: Aaron Vincent Elkaim/The New York Times, via Redux.
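The principle underneath all of this is learning from labeled examples by gradient descent. Here is a toy version with a single artificial neuron — the same learn-from-data idea as deep learning, minus the depth, the GPUs and the million images; the tiny dataset is invented for illustration:

```python
import math

def sigmoid(z):
    """Squash a score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

# Tiny labeled dataset: inputs below 0 are class 0, above 0 are class 1.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]

w, b, lr = 0.0, 0.0, 0.5  # weight, bias, learning rate
for _ in range(1000):
    for x, y in data:
        p = sigmoid(w * x + b)
        # Nudge the parameters downhill on the cross-entropy loss.
        w -= lr * (p - y) * x
        b -= lr * (p - y)

# After training, the neuron classifies every example correctly.
assert all((sigmoid(w * x + b) > 0.5) == (y == 1) for x, y in data)
```

A deep network stacks millions of such units in layers and trains them the same way; what changed around 2012 was the scale of data and compute, not the underlying idea.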
Geoffrey Hinton, whose team drove a major breakthrough in deep learning.