Although companies have created detectors to help spot deepfakes, studies have found that biases in the data used to train these tools can lead to certain demographic groups being unfairly targeted.
My team and I discovered new methods that improve both the fairness and the accuracy of the algorithms used to detect deepfakes.
To do so, we used a large dataset of facial forgeries that lets researchers like us train deep-learning models. We built our work on the state-of-the-art Xception detection algorithm, a widely used foundation for deepfake detection systems that can detect deepfakes with 91.5% accuracy. We developed two new methods on top of it.
One made the algorithm more aware of demographic diversity by labeling training datasets by gender and race, so the model could minimize errors among underrepresented groups.
The other aimed to improve fairness without relying on demographic labels by focusing instead on features not visible to the human eye.
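The demographic-aware idea can be illustrated with a common fairness technique: weighting each training sample inversely to the frequency of its demographic group, so underrepresented groups contribute equally to the loss. This is a minimal sketch of that general approach, not the authors' actual training procedure; the group labels, weighting formula, and loss function here are illustrative assumptions.

```python
from collections import Counter
import math

def group_balanced_weights(groups):
    """Inverse-frequency weight per sample (total / (n_groups * count[g])),
    so each demographic group contributes equally to the loss.
    Hypothetical sketch -- the paper's actual method may differ."""
    counts = Counter(groups)
    n_groups = len(counts)
    total = len(groups)
    return [total / (n_groups * counts[g]) for g in groups]

def weighted_bce(y_true, y_prob, weights):
    """Binary cross-entropy over real/fake labels with per-sample weights."""
    eps = 1e-7  # avoid log(0)
    losses = [
        -w * (y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))
        for y, p, w in zip(y_true, y_prob, weights)
    ]
    return sum(losses) / len(losses)

# Toy batch: group "B" is underrepresented, so its sample gets a larger weight.
groups = ["A", "A", "A", "B"]      # hypothetical demographic annotations
y_true = [1, 0, 1, 1]              # 1 = deepfake, 0 = real
y_prob = [0.9, 0.2, 0.8, 0.6]      # detector's predicted fake-probabilities
weights = group_balanced_weights(groups)
print(weights)                      # group A samples weigh 2/3, group B weighs 2.0
print(weighted_bce(y_true, y_prob, weights))
```

With balanced weights, a systematic error on the minority group is penalized as heavily as errors on the majority group, which is the intuition behind labeling data by gender and race during training.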
It turns out the first method worked best. It raised accuracy from the 91.5% baseline to 94.17%, a larger gain than our second method and several other approaches we tested achieved. Moreover, it improved accuracy while also enhancing fairness, which was our main focus.
We believe fairness and accuracy are crucial if the public is to accept artificial intelligence technology. When large language models like ChatGPT “hallucinate,” they can perpetuate erroneous information. This affects public trust and safety.
Likewise, deepfake images and videos can undermine the adoption of AI if they cannot be quickly and accurately detected. Improving the fairness of these detection algorithms so that certain demographic groups aren't disproportionately harmed by them is a key part of this.
Our research addresses deepfake detection algorithms’ fairness, rather than just attempting to balance the data. It offers a new approach to algorithm design that considers demographic fairness as a core aspect.