Open-Source Fleet Management Tools for Autonomous Mobile Robots
At ROSCon 2022, NVIDIA announced the newest Isaac ROS software release, Developer Preview (DP) 2. This release includes new cloud- and edge-to-robot task management and monitoring software for autonomous mobile robot (AMR) fleets, as well as additional features for ROS 2 developers.
So many tools are now available to help jumpstart robotics. We don’t have to reinvent the wheel all the time. What was once custom, differentiated core tech is now becoming a commodity.
Math Behind Tesla Bot's Leg Actuator Lifting a Half-Ton Piano
This actuator is the “muscle” of a robot, just like in human bodies.
It only needs to move a few centimetres, but with great force. The leg's kinematic structure amplifies the actuator's motion at the foot by a factor of more than 10x, with a proportional reduction in force.
This is why the actuator must generate so much force even though the robot itself is not heavy.
Let’s guesstimate power from the video:
It takes about two seconds to lift the piano approximately 5cm high.
Let’s say the mass of the piano is 500kg (“half a ton”).
Power = Force * Distance / Time, where Force = mass * g ≈ 500 kg × 9.81 m/s² ≈ 4,900 N.
That gives roughly 4,900 N × 0.05 m / 2 s ≈ 125W of mechanical power output (very ballpark).
Assuming an overall motor efficiency of 70% (electric motor with an inverted planetary roller screw), the required electric input power would be around 180W.
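The estimate above can be checked with a few lines of arithmetic. The mass, lift height, duration, and efficiency are the article's rough guesses from the video, not measured values:

```python
# Back-of-envelope check of the actuator power estimate.
m = 500.0   # piano mass, kg ("half a ton")
g = 9.81    # gravitational acceleration, m/s^2
d = 0.05    # lift height, m (~5 cm)
t = 2.0     # lift duration, s
eta = 0.70  # assumed overall motor efficiency

force = m * g            # ~4,900 N just to hold the piano
p_mech = force * d / t   # mechanical power output
p_elec = p_mech / eta    # required electrical input power

print(f"force:  {force:.0f} N")   # ~4905 N
print(f"P_mech: {p_mech:.0f} W")  # ~123 W, matching the ~125 W ballpark
print(f"P_elec: {p_elec:.0f} W")  # ~175 W, matching the ~180 W ballpark
```

Note that this is average power over the lift; peak power during acceleration would be higher.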
Hand Guiding Collaborative Robots Will Soon Be a Thing of the Past
Interactive Language: A framework for building interactive, real-time, natural language-instructable robots in the real world
We present a framework for building interactive, real-time, natural-language-instructable robots in the real world, and we open-source the related assets (dataset, environment, benchmark, and policies). Trained with behavioural cloning on a dataset of hundreds of thousands of language-annotated trajectories, the resulting policy can proficiently execute an order of magnitude more commands than previous works: specifically, we estimate a 93.5% success rate on a set of 87,000 unique natural-language strings specifying raw end-to-end visuo-linguo-motor skills in the real world. We find that a human can guide the same policy via real-time language to address a wide range of precise long-horizon rearrangement goals, e.g. "make a smiley face out of blocks." The dataset we release comprises nearly 600,000 language-labelled trajectories, an order of magnitude larger than prior available datasets. We hope the demonstrated results and associated assets enable further advancement of helpful, capable, natural-language-interactable robots.
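At its core, behavioural cloning on language-annotated trajectories is supervised learning from (observation, language instruction) inputs to demonstrated actions. The toy sketch below illustrates that idea with a linear policy on synthetic data; the actual Interactive Language policy is a large vision-language network trained on real robot trajectories, and all dimensions and data here are made up for illustration:

```python
# Toy behavioural-cloning loop: regress [observation ; language] features
# onto demonstrated actions. Synthetic data stands in for real trajectories.
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, LANG_DIM, ACT_DIM = 8, 4, 2
N = 256  # number of (obs, instruction, action) samples

# Synthetic "demonstrations": actions come from a hidden linear map + noise.
true_W = rng.normal(size=(OBS_DIM + LANG_DIM, ACT_DIM))
X = rng.normal(size=(N, OBS_DIM + LANG_DIM))           # [obs ; lang] features
Y = X @ true_W + 0.01 * rng.normal(size=(N, ACT_DIM))  # demonstrated actions

# Behavioural cloning = minimise mean squared error between the policy's
# predicted actions and the demonstrated actions.
W = np.zeros((OBS_DIM + LANG_DIM, ACT_DIM))
lr = 0.1
for _ in range(300):
    pred = X @ W
    grad = X.T @ (pred - Y) / N  # gradient of the MSE loss
    W -= lr * grad

mse = float(np.mean((X @ W - Y) ** 2))
print(f"final imitation MSE: {mse:.5f}")
```

The same structure scales up: swap the linear map for a deep network, the synthetic features for image and text encoders, and the closed-form data for the released 600,000-trajectory dataset.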