Sensing is what separates a humanoid robot from a humanoid statue. In advanced robotics, strain gage–based force and torque sensors are foundational: they give humanoid robots the ability to feel, balance, and adapt, enabling safe, fluid interaction with the world around them.

At the heart of many systems are force/torque sensors, often built using foil strain gages in a full Wheatstone bridge configuration. These enable high-resolution detection of the load vectors applied at a robot's joints, limbs, or fingertips, which allows for:

🔹 Backdrivability and compliance in robotic limbs
🔹 Tactile feedback during object manipulation
🔹 Force-limited operation for human-robot collaboration
🔹 Real-time collision detection and adaptive path planning

To operate in dynamic environments, these sensors must be compact, low-noise, and robust against temperature drift, electromagnetic interference, and mechanical crosstalk. That’s where miniaturized data acquisition (DAQ) systems come in — often built directly into or near the sensor node to reduce latency and wiring strain.

Our engineering team works closely with OEMs and integrators to tailor force and torque sensing packages that meet the exacting requirements of humanoid robotics — whether it's improving grip feedback in assistive exoskeletons or reducing residual forces in rehabilitation bots.

Humanoid robots are evolving fast. But without the ability to sense force precisely — and to react to it in milliseconds — there's no safe, responsive movement. That’s why strain gage–based sensors aren’t just useful… they’re mission-critical.
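As a rough illustration of how a strain gage bridge signal becomes a force reading, here is a minimal Python sketch. The gauge factor, excitation voltage, calibration matrix, and function names are placeholder assumptions for illustration, not values or APIs from any specific sensor.

```python
import numpy as np

# Illustrative placeholder values, not any specific sensor's datasheet.
EXCITATION_V = 5.0   # bridge excitation voltage (V)
GAUGE_FACTOR = 2.0   # typical for metal foil strain gages

def bridge_output_to_strain(v_out: float, v_exc: float = EXCITATION_V) -> float:
    """Ideal full Wheatstone bridge with four active gages (two in tension,
    two in compression): V_out / V_exc ~= GF * strain."""
    return (v_out / v_exc) / GAUGE_FACTOR

# A 6-axis force/torque sensor is typically read as six (or more) bridge
# channels and decoupled with a calibrated matrix C so that
#   wrench = C @ (bridge voltages - offsets) -> [Fx, Fy, Fz, Tx, Ty, Tz].
# C below is a made-up diagonal matrix with plausible N/V and N*m/V gains.
C = np.diag([200.0, 200.0, 400.0, 10.0, 10.0, 10.0])

def voltages_to_wrench(v_bridges: np.ndarray, v_offsets: np.ndarray) -> np.ndarray:
    """Map offset-corrected bridge voltages to a force/torque vector."""
    return C @ (v_bridges - v_offsets)

if __name__ == "__main__":
    v = np.array([0.012, -0.003, 0.045, 0.001, -0.002, 0.000])  # volts
    print("strain on channel 0:", bridge_output_to_strain(v[0]))
    print("wrench [Fx Fy Fz Tx Ty Tz]:", voltages_to_wrench(v, np.zeros(6)))
```

In practice the decoupling matrix comes from a factory calibration that loads the sensor along known axes; the diagonal matrix above simply stands in for that step.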
Advanced Sensor Application in Robotics
Explore top LinkedIn content from expert professionals.
Summary
Advanced sensor application in robotics refers to the use of specialized sensors that allow robots to understand, interact with, and adapt to their environment—just like human senses help us perceive the world. These technologies give robots the ability to "feel," "see," and even "hear," making them more responsive and capable in tasks ranging from handling delicate objects to navigating complex settings.
- Integrate tactile sensing: Add touch sensors to robotic surfaces so the machine can detect force, texture, and grip, improving safe interaction with people and objects.
- Use hybrid vision systems: Combine traditional cameras with event-based sensors for fast and accurate visual tracking, helping robots better follow moving objects even in challenging conditions.
- Employ advanced audio perception: Embed microphones and vibration sensors in robotic hands to identify materials and objects through sound and touch, enabling more precise sorting, handling, and recognition in real-world environments.
-
Sensing pressure in soft robotics sounds simple. Until you actually try to do it. When surfaces inflate, fold, and twist (like in pneumatic robotic hands), traditional sensing methods break down fast. Here’s what we had to engineer for:

→ Dynamic Geometry Changes: As the robotic actuator inflates, every contact point moves, stretching and reshaping the surface. Sensors have to stay accurate even when the underlying material is deforming.
→ False Positive Risk: Minor shifts in the substrate can trigger sensor readings even without true external pressure. We needed a solution that distinguished real grip forces from surface noise.
→ Mechanical Coupling: Inflation forces don’t just stretch materials — they change the way pressure is transmitted across the entire device. That meant engineering the sensor stack to accommodate both tension and compression dynamics.
→ Distributed Sensing Architecture: Instead of using one sensor per finger, we developed layered force-sensing resistor (FSR) arrays that could map pressure gradients across the entire surface.

This kind of precision mapping doesn’t come from off-the-shelf solutions. It comes from:
→ Material systems built for flexion, inflation, and repetitive motion
→ Circuitry that tolerates mechanical distortion without false triggering
→ Calibration strategies that adjust sensitivity based on actuation states (a rough sketch of this idea follows below)
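As a rough illustration of the calibration idea in the last bullet, here is a minimal Python sketch of an FSR array whose per-taxel baseline is modeled as a function of inflation pressure, so that substrate stretch is subtracted before contact is detected. The class, thresholds, and linear baseline model are assumptions for illustration, not the actual system described above.

```python
import numpy as np

class FSRArraySketch:
    """Illustrative FSR-array readout on an inflatable actuator that rejects
    inflation-induced baseline shifts (not the actual device in the post)."""

    def __init__(self, rows: int = 8, cols: int = 8, contact_threshold: float = 0.05):
        self.shape = (rows, cols)
        self.contact_threshold = contact_threshold  # normalized units, assumed
        # Per-taxel baseline as a function of inflation pressure, fit offline:
        #   baseline = b0 + b1 * inflation_pressure
        self.b0 = np.zeros(self.shape)
        self.b1 = np.zeros(self.shape)

    def calibrate(self, pressures: np.ndarray, raw_frames: np.ndarray) -> None:
        """Fit per-taxel linear baselines from no-contact frames recorded at
        several inflation pressures (pressures: (N,), raw_frames: (N, rows, cols))."""
        A = np.stack([np.ones_like(pressures), pressures], axis=1)   # (N, 2)
        coeffs, *_ = np.linalg.lstsq(
            A, raw_frames.reshape(len(pressures), -1), rcond=None)
        self.b0 = coeffs[0].reshape(self.shape)
        self.b1 = coeffs[1].reshape(self.shape)

    def contact_map(self, raw_frame: np.ndarray, inflation_pressure: float) -> np.ndarray:
        """Subtract the actuation-state-dependent baseline, then threshold to
        suppress false positives caused by the stretching substrate."""
        corrected = raw_frame - (self.b0 + self.b1 * inflation_pressure)
        corrected[corrected < self.contact_threshold] = 0.0
        return corrected
```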
-
Tactile sensing, or the sense of touch, is very important for humans and robots when they need to handle tasks involving a lot of contact. In robotic tactile sensing, there are three big challenges:

🤖 Understanding the signals from sensors
🤖 Predicting what the sensors will sense in new situations
🤖 Learning how to use the sensor data to make decisions

For visuotactile sensors, which combine vision and touch, interpreting the data is easier because they are similar to vision sensors like cameras. However, predicting sensor signals is still challenging because these sensors deal with contact, deformation, light, and images, which are all expensive and difficult to simulate. This also makes it hard to learn sensor-based policies, since large-scale data collection through simulation is challenging.

Researchers at NVIDIA have developed TacSL (pronounced "taxel"), a new library for simulating and learning with visuotactile sensors using GPUs. It allows users to simulate visuotactile images and get contact-force distributions over 200 times faster than previous methods, all within the popular Isaac Gym simulator. TacSL also offers a learning toolkit with various sensor models, training environments that involve a lot of contact, and online and offline algorithms to help with learning policies that can be applied from simulation to real-life scenarios.

On the algorithm side, TacSL introduces a new online reinforcement-learning method called asymmetric actor-critic distillation (AACD). This method is designed to learn how to use tactile data effectively in simulation and transfer that knowledge to real-world applications (a generic sketch of the asymmetric actor-critic idea follows below).

Finally, TacSL demonstrates its usefulness by showing how its library and algorithms improve contact-rich tasks and successfully transfer learning from simulations to the real world.

📝 Research Paper: https://lnkd.in/eRDWcSvF
📊 Project Page: https://lnkd.in/eDUFUjbp

#robotics #research
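For readers unfamiliar with the asymmetric actor-critic idea behind AACD, here is a generic PyTorch sketch (not TacSL's code or API): the critic is given privileged simulator state that would be unavailable on a real robot, while the actor only sees deployable observations such as proprioception and tactile features. All dimensions and names below are made up for illustration.

```python
import torch
import torch.nn as nn

def mlp(in_dim: int, out_dim: int, hidden: int = 256) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ELU(),
        nn.Linear(hidden, hidden), nn.ELU(),
        nn.Linear(hidden, out_dim),
    )

class AsymmetricActorCritic(nn.Module):
    """Generic asymmetric actor-critic layout: the critic consumes privileged
    simulator state that is cheap to access in simulation, while the actor
    sees only observations available on the real robot, so the trained policy
    can transfer without the privileged inputs."""

    def __init__(self, obs_dim: int, tactile_dim: int, priv_dim: int, act_dim: int):
        super().__init__()
        # Actor: deployable inputs only (proprioception + tactile features).
        self.actor = mlp(obs_dim + tactile_dim, act_dim)
        # Critic: additionally sees privileged state (object pose, contact forces, ...).
        self.critic = mlp(obs_dim + tactile_dim + priv_dim, 1)

    def act(self, obs, tactile):
        return torch.tanh(self.actor(torch.cat([obs, tactile], dim=-1)))

    def value(self, obs, tactile, privileged):
        return self.critic(torch.cat([obs, tactile, privileged], dim=-1))

# Example shapes (made up): 32-D proprioception, 64-D tactile embedding,
# 16-D privileged state, 7-D action.
model = AsymmetricActorCritic(obs_dim=32, tactile_dim=64, priv_dim=16, act_dim=7)
a = model.act(torch.zeros(1, 32), torch.zeros(1, 64))
v = model.value(torch.zeros(1, 32), torch.zeros(1, 64), torch.zeros(1, 16))
```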
-
Feature Detection and Tracking Redefined: Leveraging DAVIS for Hybrid Vision Applications

"Feature Detection and Tracking with the Dynamic and Active-pixel Vision Sensor (DAVIS)"

This research presents the first algorithm to detect and track visual features using the hybrid capabilities of the DAVIS sensor, which combines a standard camera and an event-based sensor in the same pixel array.

Key contributions include:
- Detection of visual features in grayscale frames and asynchronous tracking during the blind time between frames using the event stream.
- Feature design optimized for the DAVIS, leveraging large spatial contrast variations (visual edges) that generate most events.
- An event-based iterative geometric registration algorithm for robust feature tracking (a simplified sketch follows below).

Advantages:
- Provides high-frequency measurement updates during blind times.
- Enables robust performance in high-speed vision and robotics applications.

Evaluation:
- The method is tested on real DAVIS sensor data, demonstrating its effectiveness.

Video: https://lnkd.in/ecDDqkGN
Paper: https://lnkd.in/eFygS69j

#robotics
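To make the event-based tracking idea concrete, here is a toy Python sketch that registers incoming events to a feature template by iterative nearest-neighbor alignment. It is a simplified, translation-only illustration under assumed inputs, not the paper's actual algorithm.

```python
import numpy as np

def track_feature_with_events(template_pts: np.ndarray,
                              event_pts: np.ndarray,
                              iters: int = 10) -> np.ndarray:
    """Toy event-based feature tracking by iterative geometric registration
    (ICP-like, translation only; not the paper's exact method).

    template_pts: (M, 2) edge points of the feature from the last grayscale frame
    event_pts:    (N, 2) event coordinates observed near the feature between frames
    Returns the estimated 2-D translation of the feature since the last frame.
    """
    shift = np.zeros(2)
    for _ in range(iters):
        moved = template_pts + shift
        # Associate each event with its nearest template point.
        d = np.linalg.norm(event_pts[:, None, :] - moved[None, :, :], axis=2)
        nearest = moved[np.argmin(d, axis=1)]
        # Update the translation with the mean residual between events and matches.
        shift += (event_pts - nearest).mean(axis=0)
    return shift

# Usage with synthetic data: an L-shaped corner feature that moved 3 px right, 1 px down.
template = np.concatenate([
    np.stack([np.full(10, 10.0), np.arange(10.0)], axis=1),    # vertical edge
    np.stack([np.arange(10.0, 20.0), np.zeros(10)], axis=1),   # horizontal edge
])
events = template + np.array([3.0, 1.0]) + 0.2 * np.random.randn(20, 2)
print("Estimated shift:", track_feature_with_events(template, events))
```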
-
Robots that can hear? The future of machine perception is getting sharper, and SonicSense is making it possible.

Researchers at Duke University have created SonicSense, a system that’s changing how robots perceive and interact with the world.

What is SonicSense? It’s an innovative perception system that allows robots to analyze objects by interpreting vibrations through contact.

How does it work? Using microphones embedded in a robotic hand’s fingertips, SonicSense collects vibrations produced by tapping, grasping, or shaking objects. This data enables the robot to identify the object's material, shape, and even its contents. This is made possible by:

→ Vibration analysis: It captures key details like texture and material composition through interactions.
→ AI-driven learning: Advanced AI analyzes these vibrations, matching them to a database of known objects. Familiar items can be recognized after just a few touches; unknown objects may require more interactions (at least 20) but become part of the database, making the robot smarter over time (a rough sketch of this kind of acoustic matching follows below).

Researchers plan to extend SonicSense’s capabilities by developing:
→ Object-tracking algorithms, for better interaction in complex, cluttered settings.
→ Enhanced dexterity, aiming for an even closer mimicry of human touch.

SonicSense marks a new era in robotics, enabling machines to "sense" and adapt to complex real-world environments. This leap could revolutionize fields like manufacturing, healthcare, and logistics.

Are you ready for a future where robots can truly feel and understand our world?

#AI #robotics #innovation #SonicSense
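As a rough sketch of how tap vibrations can be matched against a database of known objects, here is a small Python example using log-spaced spectral band energies and nearest-neighbor lookup. The feature choice, band edges, and class names are assumptions for illustration, not SonicSense's actual pipeline.

```python
import numpy as np

def spectral_signature(vibration: np.ndarray, sample_rate: float = 48000.0,
                       n_bands: int = 16) -> np.ndarray:
    """Summarize a tap/shake recording as energy in log-spaced frequency bands
    (a simplified stand-in for a learned acoustic feature extractor)."""
    spectrum = np.abs(np.fft.rfft(vibration * np.hanning(len(vibration)))) ** 2
    freqs = np.fft.rfftfreq(len(vibration), d=1.0 / sample_rate)
    edges = np.logspace(np.log10(20.0), np.log10(sample_rate / 2), n_bands + 1)
    feats = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                      for lo, hi in zip(edges[:-1], edges[1:])])
    return feats / (feats.sum() + 1e-12)   # normalize so loudness doesn't dominate

class AcousticObjectDB:
    """Tiny nearest-neighbor database of acoustic signatures (illustrative only)."""
    def __init__(self):
        self.labels, self.signatures = [], []

    def add(self, label: str, vibration: np.ndarray) -> None:
        self.labels.append(label)
        self.signatures.append(spectral_signature(vibration))

    def identify(self, vibration: np.ndarray) -> str:
        query = spectral_signature(vibration)
        dists = [np.linalg.norm(query - s) for s in self.signatures]
        return self.labels[int(np.argmin(dists))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(4096) / 48000.0
    # Synthetic "tap" recordings: a bright, fast-decaying ring vs. a dull thud.
    tap_metal = np.sin(2 * np.pi * 3000 * t) * np.exp(-t * 200) + 0.01 * rng.standard_normal(4096)
    tap_wood = np.sin(2 * np.pi * 400 * t) * np.exp(-t * 60) + 0.01 * rng.standard_normal(4096)
    db = AcousticObjectDB()
    db.add("metal cup", tap_metal)
    db.add("wooden block", tap_wood)
    print(db.identify(tap_metal + 0.01 * rng.standard_normal(4096)))  # expected: "metal cup"
```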