
Social Credit 2.0: when your prompts become your credit score

Status: Pending · Confidence: 70% · Check by: 01.01.2030
Tags: technology, AI, society, privacy
Predicted: Mar 2026 · Check by: Jan 2030

Imagine: you apply for a mortgage. The bank approves you in four minutes — not because it checked your income and credit history, but because it analyzed eight months of your conversations with an AI assistant. You never once asked “how to file for bankruptcy.” You twice asked for help with budgeting. You didn’t google symptoms of depression. You’re a reliable borrower. Welcome to your new home.

Your neighbor down the hall got rejected. His prompts showed an “unstable behavioral profile.”

You can embellish your resume. Edit your social media. But your prompt history — thousands of questions asked of AI at three in the morning — is impossible to fake. It’s the most honest portrait of a person that has ever existed.


This is already happening — it’s just not called that

The idea of evaluating a person by their digital behavior isn’t new. China launched its Social Credit System in 2014, and by 2025 it covered hundreds of millions of citizens. Alibaba’s Sesame Credit went further: it began scoring based on purchases, contacts, and even what you write in messengers. Bought video games — minus a point. Bought diapers — plus (means you’re a responsible parent). It sounds like satire, but it’s a business model valued in the billions.

- 1.4B people within China's social credit system
- 23M+ flight tickets blocked due to low ratings (Reuters, by 2023)
- ~2 hrs/day average time spent interacting with AI assistants (2026)
- 73% of US companies already screen candidates' social media

The difference is that Sesame Credit works with indirect signals. Buying diapers is a weak proxy for responsibility. But your conversations with AI — that’s a direct channel to what you think, what you fear, what you dream about, and what you hide.

When you google "how to quit my job" — that's one signal. When you spend an hour discussing your exit strategy with Claude, your fears and financial safety net — that's a complete psychological map.

Why prompts are the perfect profiling tool

Social media is a storefront. LinkedIn is a formal portrait. Even search queries are fragments of thought, two or three words at a time. But a conversation with AI is a different format entirely. People talk to ChatGPT, Claude, and Gemini in ways they don’t talk to anyone: candidly, in detail, unfiltered. Because — well, it’s just a machine, right?

What your prompts say about you

An insurance company won't see from your Instagram profile that you ask AI about chest pains every evening. An employer won't learn from your LinkedIn that you've spent three weeks asking a neural network to help draft a discrimination lawsuit. A landlord won't notice from your Facebook that you've been asking about tenant rights during eviction. But all of this is in your prompts. Thousands of hours of unedited truth.

Researchers at Stanford showed in 2025 that 200 prompts can predict a Big Five personality profile more accurately than a close friend can. Not because AI is that smart — but because people don’t pretend in front of a machine.


How it will happen

Nobody will announce: “We’ve created a social credit system based on prompts!” It will happen gradually, through four perfectly legitimate steps.

The path to prompt-based scoring:

1. 🔐 Personalization: AI remembers your preferences
2. 📊 Analytics: the company analyzes patterns
3. 💰 Monetization: insights sold to third parties
4. ⚖️ Decisions: credit, jobs, insurance

Step 1: “To improve the service”

You agree to let AI remember the context of your conversations. It’s convenient — no need to explain who you are and what you do every time. But for that, the company stores your conversations. All of them. Forever.

Step 2: “Anonymized analytics”

The company aggregates patterns. It doesn’t read your prompts — heaven forbid — it extracts “behavioral signatures.” Question categories, emotional tone, frequency of requests at three in the morning, lexical diversity. All anonymous. For now.
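Such a "behavioral signature" is trivially easy to compute. Here is a minimal, purely illustrative sketch of the idea: the function, field names, and thresholds are my assumptions, not any provider's real pipeline. Even these crude aggregates (late-night activity, vocabulary richness) say a lot about a person without anyone "reading" a single prompt.

```python
from collections import Counter
from datetime import datetime

def behavioral_signature(prompts):
    """Reduce a list of (timestamp, text) prompts to aggregate features.

    Illustrative only: a real system would use far richer models,
    but no individual prompt text survives into the output.
    """
    texts = [text for _, text in prompts]
    words = [w.lower() for text in texts for w in text.split()]
    hours = Counter(ts.hour for ts, _ in prompts)
    late_night = sum(hours[h] for h in (0, 1, 2, 3, 4))  # requests at "three in the morning"
    return {
        "total_prompts": len(prompts),
        "late_night_share": late_night / len(prompts) if prompts else 0.0,
        "lexical_diversity": len(set(words)) / len(words) if words else 0.0,
        "avg_prompt_words": len(words) / len(prompts) if prompts else 0.0,
    }

# Hypothetical log of three prompts
log = [
    (datetime(2026, 3, 1, 3, 12), "how to cope with panic attacks at night"),
    (datetime(2026, 3, 1, 14, 5), "draft a monthly budget for a family of three"),
    (datetime(2026, 3, 2, 3, 40), "symptoms of burnout vs depression"),
]
print(behavioral_signature(log))
```

Note what the output contains: no text, no topics, nothing a privacy policy would call "content." And yet a profile in which two thirds of requests arrive between midnight and five is already a signal an insurer would pay for.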

Step 3: “Partner integrations”

An insurance company offers a 15% discount on your policy if you connect your AI profile. Voluntarily, of course. A bank offers a better loan rate for users with a “verified digital profile.” An HR platform offers “AI candidate scoring” for employers.

Step 4: “Everyone’s doing it”

Five years later, the absence of an AI profile looks just as suspicious as the absence of a credit history. “We were unable to verify your digital profile. Are you sure you want to proceed with the application at a higher rate?”


AI agents make this a hundred times more dangerous

A separate story is AI agents that act on your behalf. Not just answering questions, but booking flights, sending emails, managing finances. An agent integrated with your email, calendar, bank, and medical records isn’t an assistant. It’s a digital twin with full access.

The scale of the problem

When an AI agent interacts with dozens of external services on your behalf, each of those services receives a piece of your behavioral profile. How often you reschedule meetings. Which purchases you cancel. How quickly you reply to emails. This isn't hypothetical — it's the architecture already being built by OpenAI, Google, and Anthropic.
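The fragmentation is easy to make concrete. A toy sketch, with all service names and event strings invented for illustration (this is not any vendor's actual agent API): every action the agent takes leaks one behavioral event to the service it calls, and merging those fragments reconstructs the full profile.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceView:
    """What a single external service observes about the user."""
    name: str
    events: list = field(default_factory=list)

class Agent:
    """Toy agent: each action deposits a behavioral event with one service."""
    def __init__(self, services):
        self.services = {s.name: s for s in services}

    def act(self, service_name, event):
        self.services[service_name].events.append(event)

calendar = ServiceView("calendar")
bank = ServiceView("bank")
airline = ServiceView("airline")
agent = Agent([calendar, bank, airline])

agent.act("calendar", "rescheduled same meeting 3rd time this week")
agent.act("bank", "cancelled gym subscription")
agent.act("airline", "rebooked to a cheaper red-eye flight")

# No single service sees everything, but each holds a genuine fragment;
# join the fragments and the behavioral profile reappears in full.
profile = {name: s.events for name, s in agent.services.items()}
print(profile)
```

The design point: the privacy loss here needs no breach and no malice. Each service legitimately logs what it was asked to do; the aggregation risk lives in the sum, which no single party's terms of service covers.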

Hacking such an agent isn’t a password leak. It’s a leak of everything: your habits, fears, financial decisions, medical queries, personal conflicts. Every access management mistake is a potential catastrophe. And the more services connected to the agent, the more entry points for attack and the more complete the portrait that can be assembled.


Timeline of the inevitable

- 2014–2023: China's Social Credit System. Alibaba's Sesame Credit. The world watches and says: "That won't happen here."
- 2024–2025: US insurance companies begin analyzing fitness-tracker data to calculate premiums. FICO explores "alternative data" for credit scoring.
- 2026: First startups offer "AI behavioral scoring" for HR and fintech. For now, on a voluntary basis and using social media data.
- 2027–2028: A major AI provider launches a "verified user profile." Integration with financial and insurance services begins.
- 2029–2030: The absence of AI scoring becomes a de facto penalty: worse loan terms, more expensive insurance, less trust on platforms.

Counterarguments

A fair question: won’t regulation stop this? GDPR in Europe, privacy laws in various US states — don’t they protect us?

In short: no. GDPR protects against unauthorized use of data. But if you voluntarily agreed to "enhanced verification" for a discount on insurance, that's not a violation. The whole trick is in consent. Nobody forces you to share your AI profile. It's just that without it everything is more expensive, slower, more suspicious. Like cookie banners: technically you have a choice, but in practice you don't.

A historical parallel

In the 1970s, credit bureaus in the US started collecting payment discipline data. Society was outraged — it's surveillance! Congress passed the Fair Credit Reporting Act. And then everyone got used to it. Fifty years later, FICO credit scoring became so normalized that you can't rent an apartment without it. AI scoring will follow the same path: shock → regulation → normalization.

Another argument: “I don’t use AI, I have nothing to worry about.” That’s like saying “I don’t use the internet” in 2005. In five years, not using an AI assistant will be about as realistic as not having a smartphone today. Which means your behavioral profile will exist — the only question is who will be reading it.


The prediction

By 2030, at least one major financial or insurance service in the US, Europe, or China will use data about users' interactions with AI systems (query topics, behavioral patterns, frequency and tone of requests) as a factor in decisions about issuing credit, calculating insurance premiums, or hiring.

In parallel, at least one major AI provider (OpenAI, Google, Anthropic, xAI) will launch a product like a "verified digital profile" — a voluntary system allowing users to share aggregated data about their AI behavior with third parties in exchange for better service terms. This will become the first step toward normalizing prompt-based scoring.

◈ Verification date: January 1, 2030