AI or Not Helps Companies and Users Detect AI-Generated Content Used for Manipulation

Meet AI or Not: A Leading AI Detector for Multimedia Content

TL;DR: AI or Not provides an AI detection tool to help individuals and companies identify AI-generated content. Its team leveraged millions of real and AI-generated images and audio files to train models that pinpoint fake audio and images and identify the generators used to create them. AI or Not is currently expanding its solution set with a KYC detector and a video checker. We spoke with Anatoly Kvitnitsky, CEO and Founder of AI or Not, about the platform, its use cases, and the future of AI.

The evolution of AI has been quite a whirlwind. In a matter of months, AI-generated content has gone from easily distinguishable to strikingly realistic. From ChatGPT’s Q&A prompts to deepfake media, AI has shown its heavyweight power in the last few years. But what makes it more astonishing (or maybe a little frightening) is that we have only seen the beginning.

Like most technological marvels, AI has its upsides and downsides. Crypto created a new method for moving money but raised alarms about compliance and financial regulation. Smartphones let us carry the internet in our pockets but have fostered addictive behaviors. AI’s greatest flaw is how easily it can be used to manipulate.

Users can generate fake audio, images, and videos with AI. Distinguishing reality from fiction has become nearly impossible for the naked eye as AI technology grows more sophisticated and powerful. This is why tools like AI detectors are so crucial, and why Anatoly Kvitnitsky founded AI or Not in 2023.

AI or Not logo
AI or Not offers an AI detector for AI-generated content, including audio and images.

“AI creates a lot of opportunities around creativity, efficiency, and extending the power of what humans can do. But at the same time, it creates a lot of risk of misinformation, fraud, and scams,” said Anatoly.

Anatoly told us companies are accomplishing a lot with generative AI, but many aren’t focusing on protecting against its risks. AI or Not has created an AI detector to help users identify misinformation, deepfakes, and other untrustworthy AI-generated content. It joins the battle to keep AI ethical and protect businesses from bad actors who weaponize AI manipulation.

Training AI Models to Defeat Malicious AI Manipulation

Anatoly created AI or Not in the fall of 2023. He and a team of six, including AI engineers and researchers, built the platform to combat the adverse effects of generative AI. AI or Not has gained steam quickly, showing how critical these tools are to the future of AI generation.

“We’ve onboarded roughly 130,000 users since then, with a lot of companies across cybersecurity. Use cases vary, but they need to be protected so we all can enjoy the positives of this amazing technology,” said Anatoly.

AI or Not built and trained its own models to detect AI-generated content, using tens of millions of real and AI-generated images and files to create robust neural networks for its detector. Because AI or Not doesn’t depend on third-party API tools, it can also identify the exact model used to create a piece of AI-generated content.

AI or Not scoring interface
AI or Not provides users with an AI confidence score.

“We’re not only able to provide the confidence score of whether something was created using generative AI, but we’re able to identify the exact model that was used to create it, whether it’s Midjourney, DALL-E, OpenAI, or Stable Diffusion,” said Anatoly.
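AI or Not hasn’t published its model architecture, but the capability Anatoly describes maps naturally onto a single multi-class classifier: one class for real images plus one class per known generator, so a single forward pass yields both a confidence score and a model attribution. The sketch below is a minimal, untrained stand-in for that idea; the backbone, label set, and preprocessing are assumptions for illustration, not AI or Not’s actual pipeline.

```python
# Illustrative sketch only: AI or Not's production models are proprietary.
# A single multi-class head over {real, <generator>...} lets one forward
# pass return both an AI confidence score and a model attribution.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

CLASSES = ["real", "midjourney", "dalle", "stable_diffusion"]  # assumed label set

# Standard CNN backbone with a 4-way head; weights here are untrained,
# so the output is arbitrary. It stands in for the trained detector.
detector = models.resnet50(weights=None)
detector.fc = nn.Linear(detector.fc.in_features, len(CLASSES))
detector.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify(path: str) -> dict:
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(detector(x), dim=1).squeeze(0)
    top = int(probs.argmax())
    return {
        "verdict": "ai" if CLASSES[top] != "real" else "real",
        "generator": CLASSES[top] if CLASSES[top] != "real" else None,
        "confidence": float(probs[top]),  # the "AI confidence score"
    }

print(classify("suspect.jpg"))  # hypothetical input file
```

Framing generator attribution as extra classes, rather than a separate model, is one plausible reason a from-scratch detector can report both answers at once.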

Anatoly told us that AI or Not can only achieve this level of detection by building its models from scratch, essentially using AI to detect AI. He also discussed the differences between AI models: because Stable Diffusion is open source, it is more accessible and therefore more susceptible to abuse by bad actors, while Midjourney and DALL-E have more safeguards and produce more realistic output.

“Open source moves technology forward much faster than we can ever imagine if we’re building things in-house. However, in the case of AI, there are no watermarks. And if there were, a developer could remove them and create really bad things around them,” said Anatoly.

Most Popular Use Cases

Many use cases exist for AI detection tools. Companies across various industries can find value in harnessing AI or Not to identify AI-generated content. According to Anatoly, some of the top sectors are music, financial services, and cybersecurity.

“Some of the most successful use cases we’ve seen are around streaming and artists’ brands because they’re essentially brands to protect. It would be wrong for an artist’s brand to be ruined for something that they never did because it sounds like them,” said Anatoly.

Anatoly told us audio is currently the most complex AI detection problem to solve because of how audio files are processed. AI or Not can fingerprint an audio file, but lossy processing along the way, such as a phone call or a re-encoded upload, can raise challenges; it can detect AI-generated audio far more easily when it has the original file.
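Why does the processing medium matter so much? A common way to fingerprint audio for detection, and a reasonable guess at the general approach, is to work from spectrogram features, where synthesis artifacts tend to show up; lossy channels smear exactly those features. The snippet below is an illustrative sketch, not AI or Not’s pipeline, and the file names are hypothetical.

```python
# Illustrative sketch: log-mel spectrograms as a crude audio "fingerprint".
# Not AI or Not's proprietary method.
import librosa
import numpy as np

def spectrogram_fingerprint(path: str, sr: int = 16_000) -> np.ndarray:
    """Load audio at a fixed sample rate and return a log-mel spectrogram."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    return librosa.power_to_db(mel, ref=np.max)

# Hypothetical files: the same voice clip before and after a lossy channel
# (e.g., a phone call). A detector (not shown) would classify the
# fingerprint; the drift below shows how much signal lossy processing erases.
clean = spectrogram_fingerprint("voice_original.wav")
lossy = spectrogram_fingerprint("voice_after_phone_call.wav")
n = min(clean.shape[1], lossy.shape[1])  # align time frames before comparing
print("mean dB drift:", float(np.abs(clean[:, :n] - lossy[:, :n]).mean()))
```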

AI or Not homepage screenshot
Users can upload audio and images to detect AI in content.

“We’re continually researching from a model-building perspective of how we can continue to do some of the harder challenges around phone calls and fraudulent things. So that’s the next level of AI detection that we’re working on,” said Anatoly.

AI manipulation is also a significant threat to financial institutions, especially concerning KYC verification. The KYC process involves verifying a client’s identity before opening a financial account. AI allows bad actors to tamper with image content to create fraudulent identities. AI or Not seeks to detect this tampering before it can cause any damage.

“We’re working on this problem to ensure that the people being onboarded to whatever tool it is are indeed real and to determine whether the image or content has been tampered with. We’re spending a lot of time training our models for these issues,” said Anatoly.
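AI or Not hasn’t disclosed how its in-progress KYC detector works. As a generic illustration of one classic tamper signal, and explicitly not the company’s method, error level analysis (ELA) re-saves a JPEG at a known quality and amplifies the compression residual: regions pasted in or edited after the original save often recompress differently and stand out.

```python
# Generic tamper-detection illustration (error level analysis),
# not AI or Not's method. File names are hypothetical.
import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)  # re-save at known quality
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    # The residual is usually tiny; scale it up so edited regions are visible.
    extrema = diff.getextrema()
    max_diff = max(hi for _, hi in extrema) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

error_level_analysis("id_document.jpg").save("id_document_ela.png")
```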

The Future of AI-Generated Content

AI has opened up a world of new possibilities for users, but its negative impacts have caused great concern. Automated social engineering attacks, misinformation, and financial scams can overshadow much of what AI can accomplish for good. The AI or Not team aims to keep the balance tipped toward the positive.

We spoke with Anatoly about where he sees AI going. “The pace of the AI world has never been faster. For example, 12 months ago, if you looked at an AI-generated image, you would have brushed it off as fake. Now, you can’t tell the difference,” said Anatoly.

AI or Not video checker
AI or Not will soon release a video AI checker.

AI-generated content, whether images or audio, has become increasingly indistinguishable from reality. AI-generated audio saw a breakthrough in early 2024: users now need only a few seconds of someone’s voice to create an audio file of that person saying anything they want. Anatoly believes video will undergo the same transformation in 2024.

“If we can judge by the pace of the previous two modalities of image, followed by audio, next is video, it’s just a matter of time. Video will have the biggest breakthrough. Image and audio will continue to advance, but they’re already far along. Video will be indistinguishable soon,” said Anatoly.

Companies will also begin to treat AI manipulation as a new cybersecurity attack vector. Anatoly thinks the breaking point will be election season: once AI is used for misinformation at scale, users and companies will open their eyes to its effects.

“Everyone, whether as an individual consumer or a business, will start thinking about where AI can go wrong for me. We have to be ready as an AI detection company to make sure that we’re able to provide that service for people as well as companies. That will be our road map for the next six months,” said Anatoly.