IT

29-01-2026

THE DIGITAL FRONT 2026
A Survival Guide in the Algorithm War

 

Invisible, fast, and everywhere
Today, cybersecurity is no longer a “wall”; it is a battle of shadows. Whether you are the CEO of a multinational corporation or someone who simply wants to pay for a service through an app, your data is the contested territory in a war whose soldiers are lines of code.

  1. In the Enterprise: The chess game of “Models vs. Models”
    Corporations no longer fight only against viruses; they fight against infiltrated AIs.

The Attack: Malicious AIs “study” corporate culture. They learn, for example, how the finance manager writes and, at the right moment, generate an email, a voice note, and a Teams video requesting an urgent transfer—what we call a deepfake.

The Defense: Companies deploy Sentinel AIs. These do not look for viruses; they look for “micro-anomalies.” If the manager’s tone of voice varies by 1% in frequency, or if an email was drafted in 0.002 seconds (impossible for a human), the defensive AI cuts the communication.
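
To make the idea concrete, here is a toy Python sketch of that kind of micro-anomaly check. The signal names and thresholds are hypothetical placeholders for illustration, not the logic of any real product.

```python
# Toy illustration only: how a defensive "Sentinel AI" might flag micro-anomalies.
# The signal names and thresholds below are hypothetical, not any vendor's real logic.

from dataclasses import dataclass

@dataclass
class MessageSignals:
    voice_pitch_deviation: float    # deviation from the sender's usual pitch (0.01 = 1%)
    drafting_time_seconds: float    # time between opening the message and sending it
    sent_outside_usual_hours: bool  # simple behavioural signal

def is_suspicious(signals: MessageSignals) -> bool:
    """Return True if any signal falls outside the sender's normal profile."""
    if signals.voice_pitch_deviation > 0.01:   # ~1% pitch drift, as in the example above
        return True
    if signals.drafting_time_seconds < 1.0:    # drafted faster than any human could type
        return True
    return signals.sent_outside_usual_hours

# An "urgent transfer" request drafted in 2 milliseconds would be cut off.
print(is_suspicious(MessageSignals(0.004, 0.002, False)))  # True
```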

  2. In your day-to-day life: The user as “collateral damage”
    For the average user, the algorithm war is not science fiction—it is everyday reality:
  • Hyper-personalized scams: It will become increasingly common to receive messages from acquaintances or family members, in a convincing AI-generated copy of their real voice, saying they lost their phone and need money.
  • The deception algorithm: Attackers use AIs to bypass Gmail or Outlook spam filters by generating millions of variations of the same message until they find one that slips past the protection algorithm.

Key Data 2026:

  • Nearly 65% of internet traffic today is non-human: it is generated by service-verification algorithms or attack algorithms.
  • In the financial sector, 90% of digital interactions no longer involve humans; they are security algorithms invisibly validating processes.

Everyday situations: what you see vs. what is really happening

  • Mobile Banking
    What you see: The app asks for an unusual “facial verification.”
    What is really happening: Your banking AI detected a suspicious access pattern and is challenging a potential bot.

  • Social Media
    What you see: An ad seems to read your mind mysteriously.
    What is really happening: A psychological attack algorithm has profiled your vulnerabilities to sell you something.

  • Calls
    What you see: Absolute silence when you answer, then they hang up.
    What is really happening: An AI is “mapping” your voice to clone it in the future.

  • Job Search
    What you see: Your CV is rejected in 3 seconds.
    What is really happening: A screening algorithm detected that you did not use the “buzzwords” another AI programmed to filter humans.

  • Work Email
    What you see: An email from HR asking you to update payroll details.
    What is really happening: An automated spear-phishing attack (a targeted social-engineering cyberattack) that analyzed your latest LinkedIn posts to gain your trust.

Survival Tips

  • Establish a family “keyword”: In 2026, voice and video can be forged (deepfakes). Have a secret word with your family for real emergencies. If they don’t know the word, it’s an AI.
  • Be wary of urgency: Attack algorithms rely on stress. If something is “urgent” and requires money or data, do not respond. In 90% of cases, pausing breaks the attack cycle.
  • Verify the source: Before accepting a new colleague on LinkedIn or an internal app, cross-check channels: confirm their identity through a different channel than the one used to contact you.
  • Use AI to defend yourself: Install browser extensions and security apps that use local AI. They analyze websites before you click, detecting traps the human eye can no longer see (a simplified sketch of the idea follows below).
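
For the curious, here is a deliberately simplified Python sketch of one trick such tools rely on: spotting “lookalike” domains that imitate sites you trust. The allow-list, the homoglyph table, and the heuristic itself are illustrative assumptions, not how any particular product works.

```python
# Simplified sketch (hypothetical heuristic): flagging lookalike domains before you click.
# Real security extensions use far richer models; this only illustrates the basic idea.

from urllib.parse import urlparse

TRUSTED_DOMAINS = {"google.com", "microsoft.com", "jebsen.com.ar"}  # assumed allow-list

# A few characters attackers swap in to imitate Latin letters
# (the last three keys are Cyrillic lookalikes).
HOMOGLYPHS = {"0": "o", "1": "l", "а": "a", "е": "e", "о": "o"}

def normalize(domain: str) -> str:
    """Map common lookalike characters back to their Latin equivalents."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in domain.lower())

def looks_like_impersonation(url: str) -> bool:
    """Flag a URL whose normalized domain imitates a trusted one without matching it."""
    domain = urlparse(url).hostname or ""
    normalized = normalize(domain)
    return any(
        normalized.endswith(trusted) and not domain.endswith(trusted)
        for trusted in TRUSTED_DOMAINS
    )

print(looks_like_impersonation("https://login.micr0soft.com/reset"))   # True
print(looks_like_impersonation("https://login.microsoft.com/reset"))   # False
```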

Chrome’s “Native Shield”
Google has updated its Safe Browsing feature with AI that blocks scam sites even before they become widespread.

To enable it, follow these simple steps:

  1. Go to Settings (three dots in the upper right).
  2. Open Privacy and security > Security.
  3. Select Enhanced protection.

What does it do? Chrome uses real-time AI models to analyze whether a “technical support” or “banking” website is fake, blocking it with a red warning screen before you enter your data.
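
For readers who want to see how a URL reputation check works under the hood, here is a small Python sketch using Google’s public Safe Browsing Lookup API (v4). This is not the same engine as Chrome’s in-browser Enhanced protection models; it only illustrates the principle of asking a reputation service about a link before trusting it. You would need your own API key, and the client name below is an arbitrary placeholder.

```python
# Sketch of a manual URL reputation check against Google's public
# Safe Browsing Lookup API (v4). Requires the third-party "requests" package
# and your own API key (the placeholder below is not a real credential).

import requests

API_KEY = "YOUR_API_KEY"  # obtain from a Google Cloud project; placeholder only
ENDPOINT = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}"

def url_is_flagged(url: str) -> bool:
    """Return True if Safe Browsing lists the URL as malware or deceptive."""
    payload = {
        "client": {"clientId": "newsletter-demo", "clientVersion": "1.0"},  # arbitrary name
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    response = requests.post(ENDPOINT, json=payload, timeout=10)
    response.raise_for_status()
    # An empty JSON object means "no known threat"; "matches" lists any detections.
    return bool(response.json().get("matches"))

# Google's own Safe Browsing test page is intentionally flagged as phishing.
print(url_is_flagged("http://testsafebrowsing.appspot.com/s/phishing.html"))
```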

Microsoft Edge: its own built-in AI version
Edge does not use Google Safe Browsing directly; instead, it uses Microsoft Defender SmartScreen.

Does it have AI? Yes. Microsoft recently added an AI-powered Scareware Blocker. This feature detects sites that try to frighten you—such as fake “Your PC has a virus!” messages—by analyzing page behavior in real time.

To enable it, follow these simple steps:

  1. Go to Settings (three dots in the upper right).
  2. Open Privacy, search, and services > Security.
  3. Enable Microsoft Defender SmartScreen.
  4. Activate the option called Scareware Blocker to gain extra protection against visual scams.

Safari: uses Google’s database, but with limits
Apple uses Google Safe Browsing’s database to protect users; however, for privacy reasons, Safari does not send all your data to Google.

Does it have Google’s new AI? Not exactly. Safari receives updated lists of dangerous sites from Google, but it does not include Chrome’s “Enhanced Protection” features that analyze page content with AI in real time. Safari prioritizes privacy, so its protection is more “passive.”

To enable it:

  • On Mac: Go to Safari > Settings > Security and check “Warn when visiting a fraudulent website.”
  • On iPhone/iPad: Go to Settings > Safari and enable “Fraudulent Website Warning.”

If you have any questions regarding this topic, please do not hesitate to contact us by phone at 7078 8001 or by email at it@jebsen.com.ar.

Laura Borroni

IT

January 2026

 

This newsletter has been prepared by Jebsen & Co. for the information of clients and friends. Although it has been prepared with the greatest care and professional diligence, Jebsen & Co. does not assume responsibility for any inaccuracies it may contain.