Technology Office Challenges
To foster innovation that may lead to new R&D projects at the Laboratory, the Technology Office designs challenges that inspire multidisciplinary teams to invent creative solutions to emerging problems affecting national security.
Begun in 2010, the challenges have sparked new concepts for improving multisensor data acquisition for situational awareness, building novel hardware, detecting "fake news," developing fabrics as wearable sensors, and sensing objects underwater.
The procedure for challenges has remained the same: proposals to answer the challenge are submitted, evaluated, and narrowed down to two or three that receive funding. These finalists then present the systems they developed to solve the challenge, and a winner is declared.
Over the years, the process has evolved into a crowdsourced one, enabled by an internal online forum where teams post their proposals for the current challenge and the Laboratory community comments on and rates them.
Fool Me Challenge
We are planning a series of challenges to address the threats to national security posed by fake media and influence operations. The first of these is Fool Me, which asks researchers to identify and generate examples of data manipulation that have relevance to national security. The means of manipulation should utilize an emerging technique (e.g., machine learning or generative adversarial networks) and should avoid traditional methods for concealment, deception, or camouflage. The manipulation method should also be scalable, so that a large corpus of altered data can be generated for a future challenge.
In November 2020, two approaches to identifying and generating examples of data manipulation were named winners of this first phase of a multiyear challenge to combat influence operations. "Foreign Fake Factory: Generation of Fake IO Using Transformer Models" was proposed by Steven Smith, senior staff in the AI Software Architectures and Algorithms Group. Olivia Brown, a member of the technical staff in the AI Technology Group, created "Hiding in Plain Sight: Poisoning Attack on Land Use Classification" and is currently working with colleagues to implement the approach for the 2021 phase of Fool Me.
Technology for Advanced Fibers, Fabrics, and Extruded Elements (TAFFEE)
The TAFFEE challenge tasked teams with leveraging the capabilities of the Defense Fabric Discovery Center to develop advanced fiber concepts relevant to the Laboratory mission space. The winning technology was the Wearable Triboelectric Generator, which was designed to harvest the static electricity that accumulates on clothing. Fibers embedded in fabrics would collect the electricity that is generated when natural body motion causes two different materials to rub against each other. This energy harvesting at the fiber scale has the long-term potential to enable integration of low-power sensors and actuators in future functional fabrics.
Special Challenge: Disaster Preparedness
We sought concepts that could help first responders in the immediate aftermath of a disaster, when information from reliable sources is crucial but local infrastructure, such as power or cellular service, may be unavailable. Tracking the locations of all rescuers and making that data available in real time helps rescue efforts save lives. The winner was the Low-cost Localization using Distributed Adaptable-Response Transponders (LLDART) project, which proposed a deployable navigation network of small, low-cost radio transponders carried by rescuers to exchange their location information. By fusing this information with data from traditional sensors, such as an inertial measurement unit and GPS when available, personnel in command posts can estimate the locations of rescuers in the field.
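The idea of fusing position estimates from several sources of differing quality can be sketched with a simple inverse-variance weighting scheme. This is only an illustrative simplification; the actual LLDART processing is not described here and would likely use a more sophisticated estimator (e.g., a Kalman filter), and every number and source name below is hypothetical:

```python
# Minimal sketch: fuse independent 2-D position estimates by
# inverse-variance weighting (more trusted sources count more).
def fuse_estimates(estimates):
    """estimates: list of ((x, y), variance) tuples, one per source.

    Returns the fused (x, y) position and its combined variance.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    x = sum(w * pos[0] for (pos, _), w in zip(estimates, weights)) / total
    y = sum(w * pos[1] for (pos, _), w in zip(estimates, weights)) / total
    # Combining independent estimates always reduces the variance.
    return (x, y), 1.0 / total

# Hypothetical inputs: a drifting IMU dead-reckoning estimate, a
# tighter transponder-ranging fix, and a degraded GPS reading.
sources = [((10.0, 5.0), 9.0),   # IMU dead reckoning, high drift
           ((10.6, 5.4), 1.0),   # transponder ranging
           ((10.4, 5.2), 4.0)]   # GPS, degraded coverage
pos, var = fuse_estimates(sources)
```

The fused estimate lands between the inputs, weighted toward the most confident source, and its variance is smaller than that of any single source.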
Breaking News or Broken News? A "Fake Media" Hackathon
Unsubstantiated, sometimes intentionally misleading, media content undermines democracy by eroding public trust and spreading misunderstanding of important information. When events, actions, and government data are distorted in false reporting, the public's ability to discern truth becomes increasingly strained. To combat this problem, the Technology Office challenged the Laboratory community to build automatic detectors of unreliable media in its first-ever hackathon, held in 2017. The hackathon gave multidisciplinary teams from across the Laboratory an opportunity to explore machine learning methods for analyzing multimedia data to distinguish false from reliable news.
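As a toy illustration of the kind of machine learning approach such a hackathon might explore, here is a minimal bag-of-words Naive Bayes text classifier. The training headlines, labels, and vocabulary below are entirely invented for illustration; the actual hackathon teams worked with multimedia data and far richer methods:

```python
import math
from collections import Counter

# Hypothetical toy training set: 1 = unreliable, 0 = reliable.
docs = [
    ("shocking secret cure doctors hate", 1),
    ("you wont believe this miracle trick", 1),
    ("anonymous sources reveal shocking conspiracy", 1),
    ("senate passes budget bill after debate", 0),
    ("agency releases quarterly employment report", 0),
    ("researchers publish peer reviewed climate study", 0),
]

def train(docs):
    """Count per-class word frequencies for a multinomial Naive Bayes model."""
    counts = {0: Counter(), 1: Counter()}
    class_totals = Counter()
    for text, label in docs:
        counts[label].update(text.split())
        class_totals[label] += 1
    return counts, class_totals

def classify(text, counts, class_totals):
    """Return the class with the higher log-probability (Laplace smoothing)."""
    vocab = set(counts[0]) | set(counts[1])
    total_docs = sum(class_totals.values())
    scores = {}
    for label in (0, 1):
        score = math.log(class_totals[label] / total_docs)  # class prior
        n = sum(counts[label].values())
        for word in text.split():
            # Add-one smoothing keeps unseen words from zeroing the score.
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

counts, class_totals = train(docs)
label = classify("shocking miracle trick revealed", counts, class_totals)
```

On this toy data, sensational wording pushes the classifier toward the "unreliable" class, while bureaucratic wording pushes it toward "reliable"; a real detector would need much larger corpora and features beyond single words.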
Read the news story on the hackathon: "Using machine learning to detect fake news."
Coordinated Autonomous Novel Underwater Sensing and Exploitation Ability (CANUSEA)
Could staff devise an autonomous, coordinated approach to finding black boxes submerged after an airplane crash over the ocean? This was the question posed by the 2016 challenge, the Coordinated Autonomous Novel Underwater Sensing and Exploitation Ability, or CANUSEA. Two finalist teams demonstrated their concepts at a local lake, and their innovative approaches may lead to new research for Laboratory staff exploring strategies for undersea operations.