CaHill Resources and its subsidiary CAHill TECH are on a mission to close the labor gap in the trades, with an initial focus on heavy highway, road, and bridge construction.
The company is committed to the vision of making trade-based training available to anyone, anytime. Using a digital platform and mobile application, they provide risk reduction and operational savings to construction companies that employ millions of frontline workers.
The aQuiRe™ library offers more than 350 modules, empowering users with knowledge on subjects such as Site Operations, Machine Inspection & Maintenance, OSHA & Field Safety, and much more.
In addition, aQuiRe Construction Academy is serving the “entry organizations” side of the market and offers diverse learning materials, including Module Videos, Resources, and Quizzes to cater to different learning styles. Whether a participant learns best through visual, auditory, or written means, the program provides an array of resources to support their unique needs. CaHill Resources is a certified WBE-DBE organization.
Currently, they support 29 municipal and private clients in New York State. They are also on the Eligible Training Provider List (ETPL), which helps recruit students to complete the aQuiRe Construction Academy and receive construction training.
Upon completing a set of modules, learners earn a badge or micro-credential, signifying their achievement in the related Modules of Study. As participants progress and complete multiple micro-credentials within the same Library, they can earn stackable credentials. These stackable credentials demonstrate that participants have acquired valuable knowledge and skills in construction training.
For these reasons and more, CaHill Resources was a Cool Tool Award finalist for “Best Badging/Credentialing Solution” in The EdTech Awards 2024, and aQuiRe™ was a Cool Tool Award winner for “Best Mobile App Solution” in The EdTech Awards 2022 from EdTech Digest. Learn more.
Deepfakes, hyper-realistic videos and audio created using artificial intelligence, present a growing threat in today’s digital world. By manipulating or fabricating content to make it appear authentic, deepfakes can be used to deceive viewers, spread disinformation, and tarnish reputations. Their misuse extends to political propaganda, social manipulation, identity theft, and cybercrime.
As deepfake technology becomes more advanced and widely accessible, the risk of societal harm escalates. Studying deepfakes is crucial to developing detection methods, raising awareness, and establishing legal frameworks to mitigate the damage they can cause in personal, professional, and global spheres. Understanding the risks associated with deepfakes and their potential impact will be necessary for preserving trust in media and digital communication.
That is where Chinmay Hegde, an Associate Professor of Computer Science and Engineering and Electrical and Computer Engineering at NYU Tandon, comes in.
Chinmay Hegde, an Associate Professor of Computer Science and Engineering and Electrical and Computer Engineering at NYU Tandon, is developing challenge-response systems for detecting audio and video deepfakes. NYU Tandon
“Broadly, I’m interested in AI safety in all of its forms. And when a technology like AI develops so rapidly, and gets good so quickly, it’s an area ripe for exploitation by people who would do harm,” Hegde said.
A native of India, Hegde has lived in places around the world, including Houston, Texas, where he spent several years as a student at Rice University; Cambridge, Massachusetts, where he did post-doctoral work in MIT’s Theory of Computation (TOC) group; and Ames, Iowa, where he held a professorship in the Electrical and Computer Engineering Department at Iowa State University.
Hegde, whose area of expertise is in data processing and machine learning, focuses his research on developing fast, robust, and certifiable algorithms for diverse data processing problems encountered in applications spanning imaging and computer vision, transportation, and materials design. At Tandon, he worked with Professor of Computer Science and Engineering Nasir Memon, who sparked his interest in deepfakes.
“Even just six years ago, generative AI technology was very rudimentary. One time, one of my students came in and showed off how the model was able to make a white circle on a dark background, and we were all really impressed by that at the time. Now you have high definition fakes of Taylor Swift, Barack Obama, the Pope — it’s stunning how far this technology has come. My view is that it may well continue to improve from here,” he said.
Hegde helped lead a research team from NYU Tandon School of Engineering that developed a new approach to combat the growing threat of real-time deepfakes (RTDFs) – sophisticated artificial-intelligence-generated fake audio and video that can convincingly mimic actual people in real-time video and voice calls.
High-profile incidents of deepfake fraud are already occurring, including a recent $25 million scam using fake video, and the need for effective countermeasures is clear.
In two separate papers, research teams show how “challenge-response” techniques can exploit the inherent limitations of current RTDF generation pipelines, causing degradations in the quality of the impersonations that reveal their deception.
“Most people are familiar with CAPTCHA, the online challenge-response that verifies they’re an actual human being. Our approach mirrors that technology, essentially asking questions or making requests that RTDF cannot respond to appropriately,” said Hegde, who led the research on both papers.
Challenge frames from original and deepfake videos. Each row aligns outputs against the same instance of a challenge, while each column aligns the same deepfake method. The green bars represent the fidelity score, with taller bars indicating higher fidelity; missing bars mean that the specific deepfake failed that specific challenge. NYU Tandon
The video research team created a dataset of 56,247 videos from 47 participants, evaluating challenges such as head movements and deliberately obscuring or covering parts of the face. Human evaluators achieved about 89 percent Area Under the Curve (AUC) score in detecting deepfakes (over 80 percent is considered very good), while machine learning models reached about 73 percent.
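For readers unfamiliar with the metric, AUC can be read as the probability that a randomly chosen deepfake receives a higher “fakeness” score than a randomly chosen genuine video, with ties counting half. A minimal, purely illustrative sketch in Python (the scores below are invented for illustration, not the study’s data):

```python
# Illustrative only: computing AUC via its pairwise-ranking
# interpretation. The evaluator scores below are made up.

def auc(fake_scores, real_scores):
    """Area Under the ROC Curve via pairwise comparison:
    fraction of (fake, real) pairs where the fake is ranked higher."""
    wins = 0.0
    for f in fake_scores:
        for r in real_scores:
            if f > r:
                wins += 1.0
            elif f == r:
                wins += 0.5  # ties count half
    return wins / (len(fake_scores) * len(real_scores))

# Hypothetical confidence scores (0 = surely real, 1 = surely fake)
fake = [0.9, 0.8, 0.75, 0.6]   # true deepfakes
real = [0.7, 0.3, 0.2, 0.1]    # genuine videos

print(auc(fake, real))  # 0.9375 for these toy scores
```

A perfect detector would score 1.0; random guessing hovers around 0.5, which is why the study’s roughly 0.89 for human evaluators is considered very good.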
“Challenges like quickly moving a hand in front of your face, making dramatic facial expressions, or suddenly changing the lighting are simple for real humans to do, but very difficult for current deepfake systems to replicate convincingly when asked to do so in real-time,” said Hegde.
Audio Challenges for Deepfake Detection
In another paper, “AI-assisted Tagging of Deepfake Audio Calls using Challenge-Response,” researchers created a taxonomy of 22 audio challenges across various categories. Some of the most effective included whispering, speaking with a “cupped” hand over the mouth, talking in a high pitch, pronouncing foreign words, and speaking over background music or speech.
“Even state-of-the-art voice cloning systems struggle to maintain quality when asked to perform these unusual vocal tasks on the fly,” said Hegde. “For instance, whispering or speaking in an unusually high pitch can significantly degrade the quality of audio deepfakes.”
The audio study involved 100 participants and over 1.6 million deepfake audio samples. It employed three detection scenarios: humans alone, AI alone, and a human-AI collaborative approach. Human evaluators achieved about 72 percent accuracy in detecting fakes, while AI alone performed better with 85 percent accuracy.
The collaborative approach, where humans made initial judgments and could revise their decisions after seeing AI predictions, achieved about 83 percent accuracy. This collaborative system also allowed AI to make final calls in cases where humans were uncertain.
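The paper’s exact decision rule isn’t reproduced here, but the workflow described above (human judges first, may revise after seeing the AI prediction, and the AI makes the final call when the human is unsure) can be sketched roughly as follows. The thresholds, the averaging-based revision rule, and the function name are all illustrative assumptions, not the researchers’ code:

```python
# Illustrative sketch of a human-AI collaborative detection pipeline.
# All thresholds and scores here are hypothetical, not from the paper.

def collaborative_verdict(human_score, ai_score, unsure_band=(0.4, 0.6)):
    """Return 'fake' or 'real' from two confidence scores in [0, 1],
    where higher means more likely fake."""
    lo, hi = unsure_band
    if lo <= human_score <= hi:
        # Human is uncertain: let the AI make the final call.
        return "fake" if ai_score >= 0.5 else "real"
    # Human is confident but may revise after seeing the AI's prediction;
    # here we average the two scores as a simple revision rule.
    revised = 0.5 * (human_score + ai_score)
    return "fake" if revised >= 0.5 else "real"

print(collaborative_verdict(0.9, 0.8))   # confident human, agreeing AI -> fake
print(collaborative_verdict(0.5, 0.2))   # unsure human, AI decides -> real
```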
“The key is that these tasks are easy and quick for real people but hard for AI to fake in real-time” —Chinmay Hegde, NYU Tandon
The researchers emphasize that their techniques are designed to be practical for real-world use, with most challenges taking only seconds to complete. A typical video challenge might involve a quick hand gesture or facial expression, while an audio challenge could be as simple as whispering a short sentence.
“The key is that these tasks are easy and quick for real people but hard for AI to fake in real-time,” Hegde said. “We can also randomize the challenges and combine multiple tasks for extra security.”
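Randomizing and compounding challenges, as Hegde describes, can be sketched in a few lines. The challenge names below are drawn from examples in the article; the sampling logic itself is an assumption for illustration, not the researchers’ implementation:

```python
import random

# Illustrative sketch of randomized, compound challenge selection for a
# verification call: tasks are drawn unpredictably across modalities, so
# an attacker cannot pre-render a convincing response.

VIDEO_CHALLENGES = [
    "move a hand quickly in front of your face",
    "make a dramatic facial expression",
    "suddenly change the lighting",
    "turn your head to the side",
]
AUDIO_CHALLENGES = [
    "whisper a short sentence",
    "speak in a high pitch",
    "speak with a cupped hand over your mouth",
    "pronounce a foreign word",
]

def issue_challenge(rng, n_tasks=2):
    """Pick a random compound challenge: n_tasks distinct tasks
    sampled from the combined video and audio pools."""
    pool = VIDEO_CHALLENGES + AUDIO_CHALLENGES
    return rng.sample(pool, n_tasks)

rng = random.Random()  # in practice, seed from a secure source
for task in issue_challenge(rng):
    print("Please:", task)
```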
As deepfake technology continues to advance, the researchers plan to refine their challenge sets and explore ways to make detection even more robust. They’re particularly interested in developing “compound” challenges that combine multiple tasks simultaneously.
“Our goal is to give people reliable tools to verify who they’re really talking to online, without disrupting normal conversations,” said Hegde. “As AI gets better at creating fakes, we need to get better at detecting them. These challenge-response systems are a promising step in that direction.”
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Introducing Azi (right), the new desktop robot from Engineered Arts Ltd. Azi and Ameca are having a little chat, demonstrating their wide range of expressive capabilities. Engineered Arts desktop robots feature 32 actuators: 27 for facial control alone and 5 for the neck. They also offer AI conversational ability, including GPT-4o support, which makes them great robotic companions.
Quadruped robots that individual researchers can build by themselves are crucial for expanding the scope of research due to their high scalability and customizability. In this study, we develop MEVIUS, a metal quadruped robot that can be constructed and assembled using only materials ordered through e-commerce. We have considered the minimum set of components required for a quadruped robot, employing only metal machining, sheet metal welding, and off-the-shelf components.
Avian perching maneuvers are one of the most frequent and agile flight scenarios, where highly optimized flight trajectories, produced by rapid wing and tail morphing that generate high angular rates and accelerations, reduce kinetic energy at impact. Here, we use optimal control methods on an avian-inspired drone with morphing wing and tail to test a recent hypothesis derived from perching maneuver experiments of Harris’ hawks that birds minimize the distance flown at high angles of attack to dissipate kinetic energy before impact.
The earliest signs of bearing failures are inaudible to you, but not to Spot. Introducing acoustic vibration sensing: automate ultrasonic inspections of rotating equipment to keep your factory humming.
The only thing I want to know is whether Spot is programmed to actually do that cute little tilt when using its acoustic sensors.
This paper presents a teleoperation system with floating robotic arms that traverse parallel cables to perform long-distance manipulation. The system benefits from the cable-based infrastructure, which is easy to set up, cost-effective, and offers an expandable workspace range.
This paper introduces a learning-based low-level controller for quadcopters, which adaptively controls quadcopters with significant variations in mass, size, and actuator capabilities. Our approach leverages a combination of imitation learning and reinforcement learning, creating a fast-adapting and general control framework for quadcopters that eliminates the need for precise model estimation or manual tuning.
Parkour poses a significant challenge for legged robots, requiring navigation through complex environments with agility and precision based on limited sensory inputs. In this work, we introduce a novel method for training end-to-end visual policies, from depth pixels to robot control commands, to achieve agile and safe quadruped locomotion.
ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
Researchers at the Max Planck Institute for Intelligent Systems and ETH Zurich have developed a robotic leg with artificial muscles. Inspired by living creatures, it jumps across different terrains in an agile and energy-efficient manner.
ETH Zurich researchers have now developed a fast robotic printing process for earth-based materials that does not require cement. In what is known as “impact printing,” a robot shoots material from above, gradually building a wall. On impact, the parts bond together, and very minimal additives are required.
Using doors is a longstanding challenge in robotics and is of significant practical interest in giving robots greater access to human-centric spaces. The task is challenging due to the need for online adaptation to varying door properties and precise control in manipulating the door panel and navigating through the confined doorway. To address this, we propose a learning-based controller for a legged manipulator to open and traverse through doors.
By patterning liquid metal paste onto a soft sheet of silicone or acrylic foam tape, we developed stretchable versions of conventional rigid circuits (like Arduinos). Our soft circuits can be stretched to over 300% strain (over 4x their length) and are integrated into active soft robots.
NASA’s Curiosity rover is exploring a scientifically exciting area on Mars, but communicating with the mission team on Earth has recently been a challenge due to both the current season and the surrounding terrain. In this Mars Report, Curiosity engineer Reidar Larsen takes you inside the uplink room where the team talks to the rover.
Very often, people ask us what Reachy 2 is capable of, which is why we’re showing you the manipulation possibilities (through teleoperation) of our technology. The robot shown in this video is the Beta version of Reachy 2, our new robot coming very soon!
The Scalable Autonomous Robots (ScalAR) Lab is an interdisciplinary lab focused on fundamental research problems in robotics that lie at the intersection of robotics, nonlinear dynamical systems theory, and uncertainty.
Astorino is a 6-axis educational robot created for practical and affordable teaching of robotics in schools and beyond. It has been created with 3D printing, so it allows for experimentation and the possible addition of parts. With its design and programming, it replicates the actions of #KawasakiRobotics industrial robots, giving students the necessary skills for future work.
Watch the second episode of the ExoMars Rosalind Franklin rover mission—Europe’s ambitious exploration journey to search for past and present signs of life on Mars. The rover will dig, collect, and investigate the chemical composition of material collected by a drill. Rosalind Franklin will be the first rover to reach a depth of up to two meters below the surface, acquiring samples that have been protected from surface radiation and extreme temperatures.
Bouncy’s Ready to Learn Resilience Program is anchored by Breathing Bouncy™, a bilingual animatronic service dog. The program uses a multi-sensory approach to help children in grades preK-1 develop positive, secure relationships and build the essential social-emotional competencies needed for school success.
In addition to being used as part of whole class instruction, Bouncy the Service Dog provides a just-in-time, evidence-based response to support students when they are dysregulated. Unique, proprietary technology differentiates Breathing Bouncy from other edtech solutions in that he breathes at a slowed pediatric rate, allowing children to soothe and self-regulate when they hold him belly-to-belly and feel his chest move in sync with his breathing. While the physical Breathing Bouncy anchors the research-based learning system, the program features both physical and digital elements.
Used across settings, the program includes character-driven apps, music videos, games, interactive books and more to reinforce the relationship and provide differentiated, play-based skill practice and reinforcement.
Many of the elements are available in Spanish and English. In several pilot studies conducted with children identified as chronically disruptive, the program resulted in an increased ability of children to slow their breathing on demand. Furthermore, those same children were able, in real time, to transfer self-regulation skills to meltdown situations, which led to a reduction in problem behaviors and freed up substantially more instruction time for the teacher.
For these reasons and more, Bouncy’s Ready to Learn Resilience Program from Ripple Effects was named “Best Early Childhood Learning Solution” as part of The EdTech Awards from EdTech Digest. Learn more.
At ICRA 2024, in Yokohama last May, we sat down with the director of Shadow Robot, Rich Walker, to talk about the journey toward developing its newest model. Designed for reinforcement learning, the hand is extremely rugged, has three fingers that act like thumbs, and has fingertips that are highly sensitive to touch.
Food Angel is a food delivery robot to help with the problems of food insecurity and homelessness. Utilizing autonomous wheeled robots for this application may seem to be a good approach, especially with a number of successful commercial robotic delivery services. However, besides technical considerations such as range, payload, operation time, autonomy, etc., there are a number of important aspects that still need to be investigated, such as how the general public and the receiving end may feel about using robots for such applications, or human-robot interaction issues such as how to communicate the intent of the robot to the homeless.
The UKRI FLF RoboHike team, from UCL Computer Science’s Robot Perception and Learning lab, together with Forestry England, demonstrates the ANYmal robot helping to preserve the cultural heritage of a historic mine in the Forest of Dean, Gloucestershire, UK.
This clip is from a reboot of the British TV show “Time Team.” If you’re not already a fan of “Time Team,” let me just say that it is one of the greatest retro reality TV shows ever made, where actual archaeologists wander around the United Kingdom and dig stuff up. If they can find anything. Which they often can’t. And also it has Tony Robinson (from “Blackadder”), who runs everywhere for some reason. Go to Time Team Classics on YouTube for 70+ archived episodes.
UBTECH’s humanoid robot Walker S Lite worked in Zeekr’s intelligent factory for 21 consecutive days, completing handling tasks at the loading workstation and assisting employees with logistics work.
Current visual navigation systems often treat the environment as static, lacking the ability to adaptively interact with obstacles. This limitation leads to navigation failure when encountering unavoidable obstructions. In response, we introduce IN-Sight, a novel approach to self-supervised path planning, enabling more effective navigation strategies through interaction with obstacles.
MIT MechE researchers introduce an approach called SimPLE (Simulation to Pick Localize and placE), a method of precise kitting, or pick and place, in which a robot learns to pick, regrasp, and place objects using the object’s computer-aided design (CAD) model, and all without any prior experience or encounters with the specific objects.
Staff, students (and quadruped robots!) from UCL Computer Science wish the Great Britain athletes the best of luck this summer in the Olympic Games & Paralympics.
Walking in tall grass can be hard for robots, because they can’t see the ground that they’re actually stepping on. Here’s a technique to solve that, published in Robotics and Automation Letters last year.
There is no such thing as excess batter on a corn dog, and there is also no such thing as a defective donut. And apparently, making Kool-Aid drink pouches is harder than it looks.
A comprehensive K-12 Project-Based Learning (PBL) solution serving over 120,000 teachers in over 7,500 schools nationwide, Defined Learning engages students in high-quality projects that are based on careers to deepen understanding, enhance engagement, and build necessary future-ready skills.
The platform provides teachers with the essential curriculum and assessment tools they need to engage students in PBL, including a library of standards-aligned projects, career-focused videos, research resources, editable rubrics, and more. Defined Learning’s interdisciplinary projects are based on careers and provide opportunities for students to apply their knowledge and skills to real-world challenges. These projects excite students about their future and empower them to build the critical future-ready skills they need to succeed in college, careers, and life.
The company’s mission is to drive student engagement and achievement through real-world PBL. Through career exploration and experiences coupled with hands-on, real-world PBL, they want to ensure that all students have visibility into the limitless world of opportunities ahead of them. Their content builds student skills in areas such as 21st Century & Workplace Skills, College & Career Readiness, Social Emotional Learning, and Standards Mastery.
Research by Mida Learning Technologies showed that after utilizing PBL through Defined Learning for one year, teachers saw improvements in students’ engagement and motivation. In addition, students who used Defined Learning outperformed their peers in critical thinking and problem-solving skills.
Defined Learning gives educators all they need to facilitate deeper, career-connected learning through high-quality PBL instruction that engages students and empowers them to build future-ready skills. For these reasons and more, Defined Learning is a Cool Tool Award Winner for “Best Skills Solution” as part of The EdTech Awards 2024 from EdTech Digest. Learn more.
The Mystery Science service is a ready-to-use K-5 multimedia science and STEM curriculum resource used each month in more than 50 percent of United States elementary schools and by more than 3 million students, helping educators turn the conventional approach of answering children’s questions on its head. This simple, student-centered approach, along with the service’s ease of use, sets a new trend in science education.
Discovery Education acquired Mystery Science in 2020 and the service is now available on the Discovery Education K-12 platform. This puts all the tools needed to create engaging digital learning environments at educators’ fingertips.
Mystery Science provides simple-to-use science lessons that inspire students to love science. Each lesson begins with a question that students find interesting; students explore these questions through interactive videos that foster a sense of wonder, along with accompanying discussion prompts and labs using simple science supplies that actively support student engagement.
Mystery Science contains everything teachers need to get students engaged in hands-on science and make that subject the best class of the day. For these reasons and more, Mystery Science from Discovery Education is a Cool Tool Award Winner for “Best Science Solution” as part of The EdTech Awards 2024 from EdTech Digest. Learn more.