Latest in Robotics and Automation
17 Sep 2019
We humans spend most of our time getting hungry or eating, which must be really inconvenient for the people who have to produce food for everyone. For a sustainable and tasty future, we’ll need to make the most of what we’ve got by growing more food with less effort, and that’s where the robots can help us out a little bit.
FarmWise, a California-based startup, is looking to enhance farming efficiency by automating everything from seeding to harvesting, starting with the worst task of all: weeding. And they’ve just raised US $14.5 million to do it.
FarmWise’s autonomous, AI-enabled robots are designed to solve farmers’ most pressing challenges by performing a variety of farming functions, starting with weeding, and providing personalized care to every plant they touch. Using machine learning models, computer vision, and high-precision mechanical tools, FarmWise’s robots cleanly pick weeds from fields, giving crops the best opportunity to thrive while eliminating harmful chemical inputs. To date, FarmWise’s robots have removed weeds from more than 10 million plants.
FarmWise is not the first company to work on large mobile farming robots. A few years ago, we wrote about DeepField Robotics and their giant weed-punching robot. But considering how many humans there are, and how often we tend to get hungry, it certainly seems like there’s plenty of opportunity to go around.
Weeding is just one thing that farm robots are able to do. FarmWise is collecting massive amounts of data about every single plant in an entire field, practically on the per-leaf level, which is something that hasn’t been possible before. Data like this could be used for all sorts of things, but generally, the long-term hope is that robots could tend to every single plant individually—weeding them, fertilizing them, telling them what good plants they are, and then mercilessly yanking them out of the ground at absolute peak ripeness. It’s not realistic to do this with human labor, but it’s the sort of data-intensive and monotonous task that robots could be ideal for.
The question with robots like this is not necessarily whether they can do the job that they were created for, because generally, they can—farms are structured enough environments that they lend themselves to autonomous robots, and the tasks are relatively well defined. The issue right now, I think, is whether robots are really time- and cost-effective for farmers. Capable robots are an expensive investment, and even if there is a shortage of human labor, will robots perform well enough to convince farmers to adopt the technology? That’s a solid maybe, and here’s hoping that FarmWise can figure out how to make it work.
[ FarmWise ]
17 Sep 2019
After 25 million games, the AI agents playing hide-and-seek with each other had mastered four basic game strategies. The researchers expected that part.
After a total of 380 million games, the AI players developed strategies that the researchers didn’t know were possible in the game environment—which the researchers had themselves created. That was the part that surprised the team at OpenAI, a research company based in San Francisco.
The AI players learned everything via a machine learning technique known as reinforcement learning. In this learning method, AI agents start out by taking random actions. Sometimes those random actions produce desired results, which earn them rewards. Via trial-and-error on a massive scale, they can learn sophisticated strategies.
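The trial-and-error loop described above can be sketched with tabular Q-learning, one of the simplest reinforcement learning algorithms. This is a toy illustration, not OpenAI's actual training setup (which uses deep networks at massive scale); the corridor environment and all constants here are invented for the example. The agent starts out acting randomly, and reward feedback gradually shapes its value estimates until the greedy policy heads straight for the goal:

```python
import random

# Toy tabular Q-learning sketch (hypothetical example, not OpenAI's code).
# A 5-cell corridor: the agent starts at cell 0 and earns a reward of 1.0
# only when it reaches cell 4.
N_STATES, ACTIONS = 5, (-1, +1)  # actions: move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Mostly greedy, but sometimes random: early on the table is
            # all zeros, so behavior starts out effectively random.
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # Reward feedback nudges the value estimate (trial and error).
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# After training, the greedy policy in every non-terminal state moves right.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

Scaled up a million-fold, with neural networks in place of the lookup table, the same reward-driven update loop is what produces the sophisticated strategies described in the article.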
In the context of games, this process can be abetted by having the AI play against another version of itself, ensuring that the opponents will be evenly matched. It also locks the AI into a process of one-upmanship, where any new strategy that emerges forces the opponent to search for a countermeasure. Over time, this “self-play” amounts to what the researchers call an “auto-curriculum.”
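The one-upmanship dynamic can be illustrated with a classic self-play scheme called fictitious play, here applied to matching pennies (a toy stand-in for hide-and-seek; this is not OpenAI's method). Two copies of the same rule each best-respond to the opponent's observed habits, so every pattern one player settles into immediately creates an incentive for the other to counter it:

```python
from collections import Counter

# Self-play sketch via fictitious play on matching pennies (hypothetical
# toy example). The "matcher" wins by picking the same side as the
# opponent; the "mismatcher" wins by picking a different side.
def best_response(opponent_counts, role):
    # Best-respond to the opponent's empirical strategy so far:
    # predict their more frequent move, then match or mismatch it.
    guess = max(("heads", "tails"), key=lambda m: opponent_counts[m])
    if role == "match":
        return guess
    return "tails" if guess == "heads" else "heads"

def self_play(rounds=1000):
    hist_a = Counter({"heads": 1})  # matcher's move history
    hist_b = Counter({"heads": 1})  # mismatcher's move history
    for _ in range(rounds):
        a = best_response(hist_b, "match")
        b = best_response(hist_a, "mismatch")
        hist_a[a] += 1
        hist_b[b] += 1
    return hist_a, hist_b

hist_a, hist_b = self_play()
# Endless counter-adaptation drives each player's empirical strategy
# toward a near-even heads/tails mix, the game's mixed equilibrium.
ratio = hist_a["heads"] / sum(hist_a.values())
print(round(ratio, 2))
```

Neither player is ever allowed to rest on a winning strategy, which is exactly the escalation that produced the fort-building, ramp-stealing, and box-surfing behaviors in the hide-and-seek experiment.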
According to OpenAI researcher Igor Mordatch, this experiment shows that self-play “is enough for the agents to learn surprising behaviors on their own—it’s like children playing with each other.”
Reinforcement learning is a hot field of AI research right now. OpenAI’s researchers used the technique when they trained a team of bots to play the video game Dota 2, which squashed a world-champion human team last April. The Alphabet subsidiary DeepMind has used it to triumph in the ancient board game Go and the video game StarCraft.
Aniruddha Kembhavi, a researcher at the Allen Institute for Artificial Intelligence (AI2) in Seattle, says games such as hide-and-seek offer a good way for AI agents to learn “foundational skills.” He worked on a team that taught their AllenAI to play Pictionary with humans, viewing the gameplay as a way for the AI to work on common sense reasoning and communication. “We are, however, quite far away from being able to translate these preliminary findings in highly simplified environments into the real world,” says Kembhavi.
In OpenAI’s game of hide-and-seek, both the hiders and the seekers received a reward only if they won the game, leaving the AI players to develop their own strategies. Within a simple 3D environment containing walls, blocks, and ramps, the players first learned to run around and chase each other (strategy 1). The hiders next learned to move the blocks around to build forts (2), and then the seekers learned to move the ramps (3), enabling them to jump inside the forts. Then the hiders learned to move all the ramps into their forts before the seekers could use them (4).
The two strategies that surprised the researchers came next. First the seekers learned that they could jump onto a box and “surf” it over to a fort (5), allowing them to jump in—a maneuver that the researchers hadn’t realized was physically possible in the game environment. So as a final countermeasure, the hiders learned to lock all the boxes into place (6) so they weren’t available for use as surfboards.
In this circumstance, having AI agents behave in an unexpected way wasn’t a problem: They found different paths to their rewards, but didn’t cause any trouble. However, you can imagine situations in which the outcome would be rather serious. Robots acting in the real world could do real damage. And then there’s Nick Bostrom’s famous example of a paper clip factory run by an AI, whose goal is to make as many paper clips as possible. As Bostrom told IEEE Spectrum back in 2014, the AI might realize that “human bodies consist of atoms, and those atoms could be used to make some very nice paper clips.”
Bowen Baker, another member of the OpenAI research team, notes that it’s hard to predict all the ways an AI agent will act inside an environment—even a simple one. “Building these environments is hard,” he says. “The agents will come up with these unexpected behaviors, which will be a safety problem down the road when you put them in more complex environments.”
AI researcher Katja Hofmann at Microsoft Research Cambridge, in England, has seen a lot of gameplay by AI agents: She started a competition that uses Minecraft as the playing field. She says the emergent behavior seen in this game, and in prior experiments by other researchers, shows that games can be useful for studies of safe and responsible AI.
“I find demonstrations like this, in games and game-like settings, a great way to explore the capabilities and limitations of existing approaches in a safe environment,” says Hofmann. “Results like these will help us develop a better understanding on how to validate and debug reinforcement learning systems, a crucial step on the path towards real-world applications.”
Baker says there’s also a hopeful takeaway from the surprises in the hide-and-seek experiment. “If you put these agents into a rich enough environment they will find strategies that we never knew were possible,” he says. “Maybe they can solve problems that we can’t imagine solutions to.”
17 Sep 2019
Argonne National Laboratory and Lawrence Livermore National Laboratory will be among the first organizations to install AI computers made from the largest silicon chip ever built. Last month, Cerebras Systems unveiled a 46,225-square-millimeter chip with 1.2 trillion transistors designed to speed the training of neural networks. Today, such training is often done in large data centers using GPU-based servers. Cerebras plans to begin selling computers based on the notebook-size chip in the fourth quarter of this year.
“The opportunity to incorporate the largest and fastest AI chip ever—the Cerebras WSE—into our advanced computing infrastructure will enable us to dramatically accelerate our deep learning research in science, engineering, and health,” Rick Stevens, head of computing at Argonne National Laboratory, said in a press release. “It will allow us to invent and test more algorithms, to more rapidly explore ideas, and to more quickly identify opportunities for scientific progress.”
Argonne and Lawrence Livermore are the first DOE entities to participate in what is expected to be a multi-year, multi-lab partnership. Cerebras plans to expand to other laboratories in the coming months.
Cerebras computers will be integrated into existing supercomputers at the two DOE labs to act as AI accelerators for those machines. In 2021, Argonne plans to become home to the United States’ first exascale computer, named Aurora; it will be capable of more than 1 billion billion (10^18) calculations per second, or one exaflop. Intel and Cray are the leaders on that $500 million project. The national laboratory is already home to Mira, the 24th-most powerful supercomputer in the world, and Theta, the 28th-most powerful. Lawrence Livermore is also on track to achieve exascale with El Capitan, a $600 million, 1.5-exaflop machine set to go live in late 2022. The lab is also home to the number-two-ranked Sierra supercomputer and the number-10-ranked Lassen.
The U.S. Energy Department established the Artificial Intelligence and Technology Office earlier this month to better take advantage of AI for solving the kinds of problems the U.S. national laboratories tackle.
- Plant Phenotyping by Deep-Learning-Based Planner for Multi-Robots
- Hand-Eye Calibration With a Remote Centre of Motion
- Robust and Adaptive Lower Limb Prosthesis Stance Control via Extended Kalman Filter-Based Gait Phase Estimation
- A Communication-Aware Mutual Information Measure for Distributed Autonomous Robotic Information Gathering
- Monocular Object and Plane SLAM in Structured Environments
- Toward Ergonomic Risk Prediction via Segmentation of Indoor Object Manipulation Actions Using Spatiotemporal Convolutional Networks
- A Compact Soft Articulated Parallel Wrist for Grasping in Narrow Spaces
- Submodular Optimization for Coupled Task Allocation and Intermittent Deployment Problems
- A Testbed for Haptic and Magnetic Resonance Imaging-Guided Percutaneous Needle Biopsy
- Front Cover
- Table of Contents
- Staff List
- Decisions, Decisions [From the Editor's Desk]
- Open Access: How Best to Prepare to Master This Challenge [President's Message]
- Where Do We Go From Here? Debates on the Future of Robotics Research at ICRA 2019 [From the Field]
- Unintended Consequences of Biased Robotic and Artificial Intelligence Systems [Ethical, Legal, and Societal Issues]
- Adaptable Workstations for Human-Robot Collaboration: A Reconfigurable Framework for Improving Worker Ergonomics and Productivity
- From Natural Complexity to Biomimetic Simplification: The Realization of Bionic Fish Inspired by the Cownose Ray
- Table of Contents
- IEEE Transactions on Automation Science and Engineering
- Decentralized EV-Based Charging Optimization With Building Integrated Wind Energy
- Real-Time Detection of Fall From Bed Using a Single Depth Camera
- Concurrent Fault Detection and Anomaly Location in Closed-Loop Dynamic Systems With Measured Disturbances
- Multitasking Multiobjective Evolutionary Operational Indices Optimization of Beneficiation Processes
- Scheduling Dual-Armed Cluster Tools for Concurrent Processing of Multiple Wafer Types With Identical Job Flows
- An Echo State Gaussian Process-Based Nonlinear Model Predictive Control for Pneumatic Muscle Actuators
- Active Target Tracking With Self-Triggered Communications in Multi-Robot Teams
- Table of Contents
- IEEE Transactions on Robotics
- Tracking 3-D Motion of Dynamic Objects Using Monocular Visual-Inertial Sensing
- Adaptive Motion Planning for a Collaborative Robot Based on Prediction Uncertainty to Enhance Human Safety and Work Efficiency
- Human-Humanoid Collaborative Carrying
- Single-Agent Indirect Herding of Multiple Targets With Uncertain Dynamics
- Stochastic Control for Orientation and Transportation of Microscopic Objects Using Multiple Optically Driven Robotic Fingertips
- Probabilistic Real-Time User Posture Tracking for Personalized Robot-Assisted Dressing
- Statistical Coverage Control of Mobile Sensor Networks
- 18 Sep 2019 - 20 Sep 2019: CBS 2019 - International Conference on Cyborg and Bionic Systems
- 19 Sep 2019 - 21 Sep 2019: ISMCR 2019 - International Symposium on Measurement and Control in Robotics
- 30 Sep 2019: ROBIO 2019 - Call for Papers
- 1 Oct 2019: HRI 2020 - Call for Papers
- 3 Oct 2019 - 4 Oct 2019: Do Good Robotics Symposium (DGRS)
- 14 Oct 2019 - 18 Oct 2019: Ro-Man 2019 - International Conference on Robot & Human Interactive Communication
- 15 Oct 2019: RoboSoft 2020 - Call for Papers
- 15 Oct 2019 - 17 Oct 2019: Humanoids 2019 - International Conference on Humanoid Robots