After spending 35 years in media relations and emergency management for national and state criminal justice agencies, I understand how easy it is to distribute false information and how difficult it is to combat it. It doesn’t take a lot of technical knowledge to spread massive amounts of fear via social media.
Artificial intelligence makes it easier, but when I managed and taught emergency management, I emphasized that even someone with little experience or technical knowledge can create very realistic television news programs, radio shows, and photographs, post them on social media, and scare the hell out of the population.
There are attention-seeking individuals who do it anonymously for personal gratification. There are members of organized crime or cartels who do it to influence public opinion and demonstrate their power.
That’s what’s happened in Mexico. It could happen in the United States.
If the Mexican cartels ever established a significant foothold in the United States, the same thing could happen here. Millions of illegal immigrants have come from the countries discussed below, and many (40 to 70 percent, according to some media reports) have criminal histories. There’s no doubt that some are cartel members or belong to organized crime groups.
Questions
What happens if justice agencies attack or try to apprehend cartel leadership in the United States? Do southern states (or the rest of the country) have the capacity to deal with a massive cartel disinformation campaign? Cartels have endless amounts of money and the capacity to quickly create social media posts that mimic television and radio shows and photos. They have a core of tech-savvy individuals.
The average citizen won’t be able to tell the difference between cartel media and legitimate news sources. Yes, social media sites will take them down, but they will be immediately replaced by new disinformation using multiple servers in different locations through the dark web.
If the United States were to intervene directly in cartel activities in Mexico, the cartels would retaliate. Unquestionably, that retaliation would involve a disinformation campaign designed to frighten the American public and, consequently, damage the economic prosperity of the affected areas.
An Example: “But Online, Things Looked Even Worse”
Reuters: After Mexican forces killed the country’s most wanted cartel leader on Sunday, false accounts of spectacular violence swept across social media, fueled by what researchers say was a coordinated propaganda campaign by organized crime.
Unrest did indeed break out in many parts of Mexico as loyalists to El Mencho, the leader of the Jalisco New Generation Cartel, set up roadblocks, torched buses and stores, and attacked gas stations in retaliation for his slaying.
But online, things looked even worse (emphasis added). Among the false reports: The Guadalajara airport was taken over by assassins. A plane on the runway was on fire. Smoke was billowing from a church and multiple buildings in the city of Puerto Vallarta, popular with tourists.
These images, which were reviewed by Reuters, were false but shared tens of thousands of times. Misinformation routinely proliferates after major news events, particularly since the advent of artificial intelligence.
Experts said that, in the case of El Mencho’s killing, the fake news was being spread at a surprising speed not only by unsuspecting users but also in some cases by the cartel itself, in efforts to make its retaliatory wave of violence appear greater and more terrifying than it really was.
Mexico
I’m not sure the average American understands the level of violence and organized crime in Mexico, Central America, and some South American countries. The countries mentioned have much higher rates of violence than the United States.
Estimates of how much of Mexico the cartels control vary widely; any figure is contested, non-official, and evolving, depending on how “control” is defined. Per Google AI, the U.S. military and DEA have estimated that cartels control approximately 30 to 35 percent (roughly one-third) of Mexico’s territory, while other security analysts and reports have offered broader ranges, from as low as 20 percent to as high as 80 percent of the country. A 2023 study published in Science identified cartels as the fifth-largest employer in Mexico, with an estimated 160,000 to 185,000 active members.
According to Google AI, as of July 2025, there are over 133,215 officially registered cases of disappeared or missing persons, with critics suggesting the number may be higher due to underreporting.
Disappearances have increased by over 200% over the past 10 years, with a 16% increase observed in the first year of President Sheinbaum’s term (Oct 2024–Sept 2025) compared to the previous year.
Organized crime, particularly cartel competition for territory and illegal markets, is the primary driver. Around 93% of crimes are not reported, or if reported, are not investigated, leading to near-total impunity.
Only 2–6 percent of disappearance cases are brought before courts. While national homicide rates saw a slight decline to 24.9 per 100,000 in 2023, they remain extremely high, with many, if not most, violent deaths linked to organized crime.
Wall Street Journal: For many drug-enforcement officials in North America, there was one cartel boss who was too big and too dangerous to ever try to take down—Nemesio “El Mencho” Oseguera, the head of the Jalisco New Generation Cartel.
Now that Oseguera is dead, after a firefight Sunday with Mexican security forces, Mexico is bracing for a civil war among his top lieutenants for control of a cartel that quickly rose to be the country’s most powerful and deadly organized-crime syndicate.
Mexico is already struggling with another cartel civil war in the state of Sinaloa, where clans have been fighting for more than a year after one faction affiliated with former drug boss Joaquin “El Chapo” Guzman betrayed the head of another family clan. That conflict has left more than 2,000 people dead and 3,000 missing, likely kidnapped and killed.
Dealing With AI Deep Fakes And Rumor Control
This article started with a media request regarding the ability of law enforcement or government to identify artificial intelligence deep fake photos, video, or audio. I reminded the reporter that deep fakes have been with us for decades.
I have multiple years of experience directing public information for national and state criminal justice agencies, including law enforcement. The Maryland Emergency Management Agency was part of the Maryland Department of Public Safety when I was their director of public information.
The first thing to understand is that I or anyone can create green screen videos using a very realistic digital television news set, import commercially available videos, and put out disinformation in about twenty minutes. It’s easier to create false audio. Anyone can create AI-generated photos, but the easily accessible commercial tools can do it better. They could be sent to social media from platforms and IP addresses throughout the world via the dark web.
Understanding misinformation starts with acknowledging that it can be created faster and more convincingly than most people imagine. It’s equally important to ask: if I can do it, what happens when cartels with endless budgets and access to the dark web engage in the same process?
What happens if hundreds of thousands of Phoenix area citizens are told that their water has been purposely contaminated? What happens when the same campaign tells El Paso residents that their food supply has been compromised? All you would have to do is use chemical agents in small areas to create real-world examples that add credibility to the disinformation campaign.
The cartels have access to a multitude of weaponized drones that they use against each other as well as Mexican law enforcement. It’s not beyond the realm of possibility that they could be used against American cities.
Yes, AI-generated media makes the government’s job of establishing the validity of any media placed online harder, but every state has the capacity (or should have the capacity) to accurately gauge what’s offered via social media and respond with corrected information.
The primary question is whether law enforcement or the government has the capacity to respond to hundreds of false social media reports in a short amount of time.
The issue isn’t AI, although that makes the mission of providing accurate information more challenging.
There have always been obstacles in providing accurate information to the public. Photoshop images, the ability to buy commercially available video footage, and the use of green screen technology have been with us for decades.
For example, if the concern is floodwaters threatening to breach a dam, a video simulating the dam bursting, sent via social media, can create immense panic.
I have the tools to create a very realistic video using a news set with footage instantly available from commercial sites. I can create massive panic.
The topic or situation doesn’t matter; if I want to create a disturbance or induce hysteria, I can do it.
The Real Issue Is Rumor Control
Response isn’t principally digital. It requires an immense amount of person-power. How do you ensure that the dam isn’t breaking? Have someone stationed at the dam with the authority to interact with management. That takes a lot of planning and coordination.
This is why the public affairs section of any police, criminal justice, or emergency management agency needs to be prepared. Addressing, validating, or dispelling rumors, false videos, fake audio, and doctored photos is a vast undertaking that requires public information officers and subject-matter experts from a variety of agencies. Every state needs to be able to scramble a multitude of people quickly, working 12-hour shifts, and those people must have the tools to communicate.
When I was the director of public information for the Maryland Department of Public Safety and the Maryland Emergency Management Agency, we called in FEMA to evaluate practice sessions focusing on nuclear power plant meltdowns or chemical weapons drills to make sure that the right information, equipment, and personnel were available.
But the event didn’t matter; the state offered its resources to any requesting agency, and then I called up to 30 experts from a variety of agencies to serve as my rumor control team. This had to operate 24 hours a day until the event passed.
There was one person in charge whose primary job was to insist on verification of any information.
We monitored the news media and social media for any sign of false information or suspicious media being released.
Once verified, I, as the primary spokesperson, would release that information to the public after consultation with the agency involved. There was one spokesperson, me. Multiple spokespeople would inevitably offer misinformation.
We would also send public information officers or agency personnel to the scene of any emergency, so we would have people who could verify information immediately.
Again, there is a process that exists in every state to track down false or misleading information. Yes, AI makes it more challenging, but we had people trained on verifying false images, misleading videos, or audio.
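Some of that verification work can be given a technical first pass before a human expert ever looks at the material. As one hypothetical illustration (this is my sketch, not a tool the agencies described above actually used), a rumor-control team could scan a submitted image for an embedded C2PA “Content Credentials” provenance manifest, the standard that some cameras and AI image generators now attach to files. The check below is only a naive byte search, not a full C2PA parser, and the absence of a marker proves nothing on its own:

```python
# Naive first-pass provenance check for a rumor-control workflow.
# C2PA manifests are embedded in image files inside JUMBF boxes whose
# labels contain the string "c2pa", so a raw byte scan can flag files
# that carry provenance data. A real pipeline would parse and
# cryptographically validate the manifest with a proper C2PA library.

def has_c2pa_marker(path: str) -> bool:
    """Return True if the raw file bytes contain a C2PA manifest marker."""
    with open(path, "rb") as f:
        data = f.read()
    return b"c2pa" in data

if __name__ == "__main__":
    import sys
    for image_path in sys.argv[1:]:
        if has_c2pa_marker(image_path):
            print(f"{image_path}: embedded Content Credentials found")
        else:
            print(f"{image_path}: no provenance data found")
```

A check like this only triages: files with credentials can be routed to cryptographic validation, while unmarked files still need the human verification process described above.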
The media becomes your partner. You can’t operate an anti-disinformation team without them. They have to know the harm created by releasing suspicious materials. They must be willing to come to you first before releasing information. Establishing good media relations beforehand is a wise move.
The media MUST have quick access to your spokesperson. There MUST be people assigned to the primary spokesperson to catalog and prioritize incoming media calls. Using the Associated Press as your first response mechanism will instantly get the word out to all others.
Conclusion
Few in the public understand what states do to verify information and the process of getting facts out. We know the reporters and how to reach them through the Associated Press or direct contacts.
Any agency in the country has access to its state emergency management function and the resources available to it.
In Maryland, we trained the team twice a year, and the primary spokespeople were often FEMA-trained. Within my department, we trained our part-time spokespeople to address emergencies within our 12 departments, but we also used our part-time PIOs to assist other agencies.
It’s all part of a very precise plan to put the right people in the right place to make quick judgments that are in the public’s best interest. It’s more than possible to discover false AI-generated products principally through our network of experts or by having people stationed at the scene.
The video or photo of a major dam bursting doesn’t do sustained damage if you have personnel there who can verify that it’s fake.
It’s the same with AI versions of any media.
