One of the pillars of representative government in the United States is people’s ability to participate in setting rules and regulations. Regulatory agencies are usually required to solicit comments from various publics (e.g., general citizenry, affected organizations, interest groups) to learn about the potential consequences of proposed rule changes.
To extend participation and reduce costs, the commenting process has been digitized and now often takes place online. However, digitization opened the door to opinion spam (e.g., mass, computer-generated, or fraudulent comments) that may undermine the rulemaking process by deceiving agency evaluators and distorting perceptions of citizens’ actual attitudes toward proposed regulations. Opinion spam complicates the evaluation that agencies must perform in setting rules and threatens the legitimacy of the rulemaking process in the eyes of stakeholders.
This project investigates ways in which opinion spam might be prevented and provides evidence about which techniques are most effective, thereby preserving, or potentially restoring, public trust in digital rulemaking. In three phases, the project examines threats to digital rulemaking and tests mitigation approaches for reducing opinion spam.
Phase I includes a series of interviews with comment submitters, comment evaluators, and scholarly experts on mis/disinformation to gauge how these groups conceive of opinion spam and its prevalence in commenting discourse, and to uncover interventions that may limit the submission of opinion spam or help agencies detect it.
Phase II includes generating machine-learning datasets containing legitimate comments, fictitious comments, and comments produced by automated text replacement or text recombination (a sketch of such generation appears below), along with models for distinguishing fictitious or artificial comments from legitimate ones.
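To illustrate how automated comments of this kind might be produced for a training set, the following is a minimal sketch of text-replacement and text-recombination generation. The word list, function names, and splicing rule are illustrative assumptions, not the project's actual generation pipeline.

```python
import random

# Hypothetical illustration only: simple word-replacement and
# sentence-recombination generators, not the project's actual method.
REPLACEMENTS = {
    "oppose": ["reject", "object to"],
    "support": ["endorse", "back"],
    "rule": ["regulation", "proposal"],
}

def text_replacement(comment: str, rate: float = 0.5) -> str:
    """Swap known words for near-synonyms to mimic templated spam."""
    words = []
    for w in comment.split():
        key = w.lower().strip(".,")
        if key in REPLACEMENTS and random.random() < rate:
            words.append(random.choice(REPLACEMENTS[key]))
        else:
            words.append(w)
    return " ".join(words)

def text_recombination(a: str, b: str) -> str:
    """Splice sentences from two legitimate comments into a new one."""
    sa, sb = a.split(". "), b.split(". ")
    return ". ".join(sa[: len(sa) // 2] + sb[len(sb) // 2 :])

if __name__ == "__main__":
    legit = "I oppose this rule. It would raise costs for small farms."
    other = "I support the proposal. It protects water quality statewide."
    print(text_replacement(legit))
    print(text_recombination(legit, other))
```

Pairing generated comments like these with genuine submissions yields labeled examples on which a detection model can be trained.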
Phase III integrates the findings from the prior phases and includes selecting several viable opinion-spam mitigating strategies and testing their efficacy in randomized, controlled experiments.
This multi-method, interdisciplinary investigation contributes to the theory of coordinated influence campaigns. The project develops a syntax-aware deep learning model for detecting fictitious comments (a rough sketch follows) and helps determine which mitigation approaches work best for reducing opinion spam.
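As a rough illustration of what a syntax-aware classifier could look like, the sketch below combines word embeddings with part-of-speech embeddings before a binary legitimate-vs.-fictitious decision. The architecture, dimensions, and dummy inputs are assumptions for illustration, not the project's published model.

```python
import torch
import torch.nn as nn

class SyntaxAwareClassifier(nn.Module):
    """Toy sketch: word embeddings concatenated with part-of-speech
    embeddings, encoded by an LSTM and fed to a binary classifier.
    All dimensions are illustrative assumptions."""

    def __init__(self, vocab_size=10_000, n_tags=20,
                 word_dim=64, tag_dim=16, hidden=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.tag_emb = nn.Embedding(n_tags, tag_dim)
        self.encoder = nn.LSTM(word_dim + tag_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, word_ids, tag_ids):
        # Concatenate lexical and syntactic features per token.
        x = torch.cat([self.word_emb(word_ids), self.tag_emb(tag_ids)], dim=-1)
        _, (h, _) = self.encoder(x)
        return self.head(h[-1])  # logits: [legitimate, fictitious]

# Dummy pre-tokenized, pre-tagged input for shape-checking.
model = SyntaxAwareClassifier()
words = torch.randint(0, 10_000, (2, 12))  # batch of 2 comments, 12 tokens
tags = torch.randint(0, 20, (2, 12))       # matching POS-tag ids
print(model(words, tags).shape)            # torch.Size([2, 2])
```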
NSF Growing Convergence Research (GCR) Project
Over the next five years, the OU National Science Foundation (NSF) Growing Convergence Research (GCR) project aims to assist Oklahoma, a state heavily reliant on gas production, in transitioning to green energy. The project plans to achieve this by using hydrogen (H2) as a fuel, moving industrial sectors such as manufacturing and transportation away from carbon-intensive fuels. To learn more, go to oucheps.org/range.