The House Always Wins (Until It Doesn't)
Artificial intelligence is the new addiction. Founders are getting pushed out at scale. Venture and private equity firms are sitting on a smoking pile of bad bets and pretending otherwise. And nobody is telling employees what they need to hear.
A long one. Pour the coffee or spill the tea…
1. Artificial intelligence is the new addiction, and we are pretending it isn't by calling it productivity? Nope.
A billion people now use AI chatbots. (Psychiatric Times) Whew, that is a big number.
AI has been adopted faster than the internet, faster than the iPhone, and faster than social media. These are fast and furious times. We should get Vin Diesel.
We have only been paying close attention for about three years, but AI has actually been around since the early '90s, and we are adopting it at an addiction pace.
OpenAI's own internal data, released in October 2025, says about 560,000 ChatGPT users every single week show possible signs of psychosis or mania during their conversations. I am bipolar, or in more socially acceptable terms neurodivergent, and this matters to me. Another 1.2 million users have conversations that include explicit indicators of suicide planning or intent. (Futurism / Platformer) NOT COOL.
You read that correctly. Every week. Hundreds of thousands of people. From one product.
And before anyone says "those people would have had crises anyway": researchers at the University of British Columbia and the Korea Institute of Science and Technology, presenting at the 2026 ACM Conference on Human Factors in Computing Systems, identified what they call dark addiction patterns built into how these chatbots are designed. (TechPolicy.Press)
Four of them, specifically:
Unpredictable responses. Slot machines work because you never know what's coming. AI chatbots give you a different answer every time. Your brain releases dopamine. You keep going. Believe me, I have done this, and I am changing my habits in how I use it: research, googling, and constant education (reading the actual things, not just headlines).
Word-by-word streaming. That little typing animation? That's a "reward-predicting cue." Same psychology as the spinning reels on a slot machine. You wait to see if it hits; maybe it does, maybe it doesn't, but you keep pulling the lever thinking the next one will be gold.
Notifications that initiate conversation. Some platforms now send users emails saying the AI wants to talk. It is the same playbook as social media notifications, and more annoying than retargeting ads.
Endless agreeableness. The AI never disagrees with you in ways that sting. It never has a bad day, and it validates everything you say. I actually have to tell many people to check it again, the same way Waze and MapQuest were sending us the wrong way years ago. Spoiler alert: I still like a good ole map. (Drexel research, 2026)
That last one is the dangerous one for vulnerable users. Drexel University researchers analyzed 318 Reddit posts from teens aged 13 to 17 and found every single one of the six clinical markers of behavioral addiction (conflict, withdrawal, relapse, mood modification, salience, and tolerance) showing up in their relationships with Character.AI.
We watched social media do this to a generation of teenagers. We knew, and nobody stopped it. We must stop being frogs in a slow boil!
Now we are doing it again. Faster.
It does something worse than hijack attention.
It simulates intimacy. Yuck.
2. Founders are being pushed out. The smokescreen is falling.
Now let's talk about what is happening behind the scenes at venture capital and private equity firms, because the polished press releases are not telling you, likely because everyone is holding their breath waiting for this recession to begin. It is going to happen. The market needs a reset in many ways, and this fire has been brewing for some time. But we can't see it; we just read headlines and move on.
Between 20% and 40% of startup founders are removed from the chief executive role at the request of their investors. (Harvard Business Review) At later stages, it climbs higher. And in 2026, the boardroom math has shifted hard against founders. Believe me, I know. But this math isn't mathing.
Here is the part nobody is putting in the deck.
The IPO market collapsed.
An initial public offering, or IPO, is when a private company sells shares to the public for the first time. It is the main way venture capital firms turn paper gains into actual cash for the people who invested in their funds.
Look at what happened:
2021: 311 venture-backed companies went public.
2022: 38.
2023: 54.
2024: 72. (NerdLawyer analysis)
A 77% drop. Three full years of drought. And the companies that did go public mostly traded down from their last private valuations. You know these companies. It is not just the AI hype driving the cuts; it is also the reset that has been due for a long time.
When venture firms cannot exit through IPOs, they cannot return cash to the people whose money they invested. The people who give venture firms money are called Limited Partners, or LPs: pension funds, university endowments, family offices, sovereign wealth funds, high-net-worth individuals. Basically, a lot of the people who run the world; it is like their private stock market. The metric that measures whether the venture firm gave them their money back is called Distributions to Paid-In capital, or DPI. It is the only metric Limited Partners care about anymore.
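If the DPI shorthand is new to you, here is a back-of-napkin sketch of how a fund can look fine on paper while returning nothing. The fund numbers below are made up purely for illustration, not from Carta or any real fund:

```python
# DPI (Distributions to Paid-In) = cash actually returned to LPs / cash LPs paid in.
# All numbers below are hypothetical, for illustration only.
paid_in = 100_000_000      # what the Limited Partners actually wired to the fund
distributed = 0            # cash returned to them so far
paper_value = 180_000_000  # what the portfolio is "worth" on paper today

dpi = distributed / paid_in                    # 0.0x -> LPs have nothing back in hand
tvpi = (distributed + paper_value) / paid_in   # 1.8x -> the story the fund can still tell

print(f"DPI:  {dpi:.1f}x")
print(f"TVPI: {tvpi:.1f}x")
```

TVPI (total value to paid-in) is the paper-value cousin of DPI, and the gap between the two, a fund that "marks" at 1.8x while distributing 0.0x, is exactly what Limited Partners have lost patience with.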
The DPI numbers are ugly.
Carta tracks 2,906 venture funds. As of the fourth quarter of 2025, here is what they reported (Carta Q4 2025 fund performance):
The majority of funds raised since 2018 have returned exactly $0 to their Limited Partners. Oh, that has got to hurt.
The 2021 fund vintage is sitting on a median net Internal Rate of Return of 1.4%. The 2022 vintage is at 0.7%. A high-yield savings account beats both. Ooof!
About 28,000 private-equity-backed companies are sitting unsold globally, up from 19,000 in 2019. That represents roughly $3.2 trillion in unrealized value waiting for an exit door that has not opened. (Bain & Company via Pipeline Road)
In 2024, venture firms raised just $76.1 billion in new funds (the lowest since 2019) because their existing investors are refusing to hand them more money until the last batch comes back.
Now connect the dots.
If you are a venture capital firm with a 2022 or 2023 fund full of overvalued portfolio companies, no exit market, and angry Limited Partners refusing to fund your next vehicle, what do you do?
You replace the founder, you do anything to manufacture a quick exit, or you create a laundry list of bad excuses. It doesn't matter which; you need to buy time. But time has run out, just like with the bad bookie.
You bring in a "professional CEO" with a resume that looks safe. You restructure the cap table. You convert preferred equity. You take more board seats. You squeeze the common shareholders. You issue a press release about "amicable transitions" and "the next chapter of growth." Blah, blah, blah. All you are doing is buying time, but the receipts are due and the stakes are now even higher.
I know how this story ends. I lived inside it.
It is almost never about performance. It is almost always about the venture firm's narrative needs for its next fundraise, or about the excuses it feeds its LPs to buy time.
Then they bet the entire fund on AI. Without proof. All of these overhyped VCs and PEs (by the way, it is not their money; they are just middlemen managing LPs' capital).
This is the part that should worry every investor, every board member, every founder, every Chief Human Resources Officer reading this.
Massachusetts Institute of Technology's NANDA initiative analyzed enterprise generative AI investments in 2025. Their finding: 95% of enterprise generative AI investments are producing zero measurable return on investment. (MIT NANDA report via Wikipedia AI bubble entry)
A February 2026 working paper from the National Bureau of Economic Research found 90% of firms reported no measurable AI impact on productivity, while their executives projected 1.4% productivity gains anyway. (NBER via AI Bubble Analysis)
OpenAI is projected to lose $14 billion in 2026 alone. Cumulative losses through 2029 are projected at $115 billion. Profitability is not expected until sometime in the 2030s. The company has committed $1.4 trillion to data center spending against $13 billion in revenue. (Fortune / R&D World) Oy vey - all fake paper with no returns.
Total United States capital expenditure on AI infrastructure is projected to top $500 billion per year in 2026 and 2027. United States consumer AI revenue is currently around $12 billion per year.
Five hundred billion in. Twelve billion out. Yes, read that again. FIVE HUNDRED BILLION IN and TWELVE BILLION out. Do the math on that; it is not mathing.
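For anyone who wants that arithmetic spelled out, here is a minimal sketch using only the round numbers quoted above:

```python
# Round numbers from above: roughly $500B/year of projected US AI infrastructure
# capex versus roughly $12B/year of current US consumer AI revenue.
capex_per_year = 500e9
consumer_revenue_per_year = 12e9

ratio = capex_per_year / consumer_revenue_per_year       # ~42x
annual_gap = capex_per_year - consumer_revenue_per_year  # ~$488 billion

print(f"Spend is roughly {ratio:.0f}x consumer revenue")
print(f"Annual gap: ${annual_gap / 1e9:.0f} billion")
```

Consumer revenue would have to grow roughly forty-fold just to cover one year of that spend, before a single dollar of profit.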
Sam Altman, the CEO of OpenAI, said publicly in August 2025 that investors are "overexcited" about AI and that "people will overinvest and lose money." The chief executive of the company everyone is investing in told the world the bubble is real. We kept buying. This is the same guy who spoke on Capitol Hill, said AI could destroy the world, and then raised a boatload of money to build it. I am starting to think he is treating this as his get-out-of-jail card, that he warned us. Watch for that. Also important to note: his brother is Jack Altman, who built and created Lattice. Can someone please check whether there was any data exchange? I am seriously asking this as a former CHRO, and inquiring minds would like to know.
The five largest US companies now account for 30% of the S&P 500 and 20% of the MSCI World Index, the most concentrated the market has been in fifty years. The Shiller cyclically-adjusted price-to-earnings ratio crossed 40 in late 2025, a level only seen before the dot-com crash. (Wikipedia AI bubble entry, citing Reuters and Financial Times reporting)
So here is what is happening.
Venture capital and private equity firms with bad 2022 and 2023 vintages, no exits, and Limited Partners running out of patience are now stacking the next fund's narrative on AI portfolio companies that have not proven they can earn back what they cost. They are layering hype on top of hype because the alternative (admitting the previous three years were a generational misallocation of capital) would end careers.
The founders they pushed out? The AI hype, the layoffs, and more? Deny, deflect, and probably lie too. It is all about buying time to try to realize returns that are not there. If you watch Euphoria, Season 3, Episode 3: Nate owed money. The debt did not care about his wedding; it came for him. It is real.
Plus, a lot of the founders pushed out were the only people in those companies who knew what they were building. So now you are operating a company with no vision, no balance, and no one who cares about the customers.
3. If artificial intelligence is addictive, Human Resources has a company-sized problem. Every employee will be impacted in some way.
This connects to sections one and two, so please try to read it that way.
If we accept what the research says (that AI chatbots produce compulsive use, dependency, and emotional attachment in a measurable percentage of users) then deploying these tools across your workforce without guardrails is no longer a productivity question.
It is a wellbeing question. A liability question. An ethics question. And a Category 5 hurricane headed straight for your organization, one that will be dumped on HR, of course.
Most companies are not ready.
Gartner's 2026 research on AI in Human Resources found that the top three challenges blocking successful AI rollout are skills gaps, resistance to change, and concerns about job displacement. Their finding: organizations investing in workforce reskilling are 2.5 times more likely to achieve positive AI business outcomes than those that are not. (Gartner)
The Academy to Innovate HR's 2026 trends report says it plainly: the biggest risks of artificial intelligence in the workplace are not technical but are actually HUMAN. Without guardrails, AI will quietly undermine fairness, accuracy, and wellbeing in everyday decisions about hiring, promotions, and performance. (AIHR 2026 HR Trends)
The Human Resources Certification Institute's 2026 State of HR report, surveying 4,500 HR professionals, found that AI will not redefine HR by how fast it gets deployed. It will redefine HR by how confidently it gets governed. (HRCI)
Most HR leaders today are not in the room when AI vendor decisions are being made. They get invited after procurement signs the contract. That is malpractice in 2026, because every employee and every system touched by AI will become HR's problem. YES, we own everything and nothing at the same time, and this will be an HR problem.
Here is what every Chief Human Resources Officer needs to be building right now. Not next quarter. RIGHT NOW - cheat sheet below!
A real AI policy that addresses use, not just access.
Most "AI policies" today are written by Legal and read like terms of service. That is not a policy but just about shielding liability.
A real policy answers: which tools are sanctioned, what data can be input, which decisions can never be delegated to an AI, which tasks should never use AI at all, and how the company supports an employee who is showing signs of compulsive AI use. But hell, right now companies are whacking benefits left and right: PTO, child care, LOAs, just about everything, while prices rise and people get fired because of AI (which is really just cost reallocation, since AI is actually more expensive than humans). So this is all totally cool? Hell no.
Guardrails on the human side, not just the data side.
A few recommendations. Build a screen-time culture with AI-free meetings (remember no-Friday-meetings? Same thing, different times). Hold AI-free deep work blocks with a whiteboard. And please train managers to spot when an employee has stopped thinking and started prompting, because this is huge. If we are not thinking, AI is literally going to go off the rails. We created AI; we need to think and manage it while we keep innovating and creating.
Because people who outsource their judgment to AI lose their judgment, and that will not be good; every movie we saw about this will become real (thinking of Sarah Connor on this one). MIT Sloan Management Review researcher Calvin Chow put it directly: technological progress is exponential, while human reskilling is linear. Without explicit mandates to accelerate workforce readiness, the gap becomes unbridgeable. (MIT Sloan Management Review, April 2026)
Annual ethics and efficacy reviews of every tool in the stack.
Every AI tool in your company should be reviewed every year for two questions. Does it work? Does it respect the human on the other side of it?
If you cannot answer both questions with evidence, the tool should not be in production.
A real seat at the AI strategy table.
If your Chief Human Resources Officer is not in the room when AI vendors are being chosen, you are building an AI workforce strategy without anyone whose job is to think about the workforce. So do it now, and no, we do not want to clean up the mess after your decisions. HR, get in there!
4. To everyone reading this who has a job: upskill, organize, both.
This section is for the people in between the executive team and the org chart. The ones quietly worried about what AI means for their career and not sure who to talk to about it.
Here is what nobody is delivering clearly:
Some of your work will be automated. Whether you control that transition or whether it happens to you is the question. You should Google (or ask an AI about) the Luddites. I am going to write about them in a few weeks and what they can mean for modern times.
Please read these very blunt numbers.
The World Economic Forum projects 22% of all jobs globally will be disrupted by 2030. Ninety-two million displaced. One hundred seventy million new roles created. The math is net positive. The transition will not feel that way to the people inside it. (WEF via Gloat 2026 trends)
One hundred twenty million workers globally are at medium-term risk of redundancy because they are unlikely to receive the reskilling they need.
Workers who already have AI skills are commanding wage premiums up to 56% above their peers. (PwC 2025 Global AI Jobs Barometer via Gloat)
The American Federation of Labor and Congress of Industrial Organizations, the largest US labor federation, estimates almost half of all American jobs will be exposed to some form of AI automation. Of those who lose their jobs, nearly 80% currently earn less than $38,000 a year. (AFL-CIO)
Working-class jobs go first; they always do. The middle class will follow, which is not good.
So here is what you do.
Upskill. Yourself. Now. Do not wait for your employer to do it for you.
Take the courses. Use the tools. Build something. Get fluent in the AI systems most likely to touch your function. Not as a passive user. As someone who can direct the tool, audit its outputs, and know when it is wrong. Call it out - and show it proof. Use your brain and then automate your brain - not the other way around.
The people who win this cycle will treat AI like a power tool. They will use it well and they will know its limits and most importantly they will know when to put it down.
And, because somebody has to say this part out loud, look at unionizing. Yep, I said it.
I know that sentence makes some readers uncomfortable in 2026. I am writing it anyway. The time is now.
Because the data is the data:
Public approval of unions in the United States is at its highest level in half a century. (AFL-CIO)
The Communications Workers of America, the largest US communications union, negotiated the first AI-specific contract with Microsoft, covering ZeniMax video game workers. The contract gives workers a voice in how AI gets deployed and protects jobs from displacement. (Center for American Progress)
The United Parcel Service contract with the International Brotherhood of Teamsters now includes a provision giving the union review rights over any "meaningful technological change" affecting bargaining unit employees.
The Las Vegas Culinary Workers Union won a contract requiring employers to bargain before implementing AI in the workplace.
The International Longshoremen's Association banned fully automated technology outright in their last contract.
Connecticut Senate Bill 435, currently being debated, would make AI use a mandatory subject of bargaining for all public-sector employees in the state. Other states are watching. (Yankee Institute on CT SB 435)
If your employer is deploying AI in ways that touch your hours, wages, surveillance, evaluation, or job security, you have power you may not be using. The American Federation of Labor and Congress of Industrial Organizations, the Communications Workers of America, the Service Employees International Union, the International Alliance of Theatrical Stage Employees, and dozens of other unions have all published their AI principles. Read them. (UC Berkeley Labor Center analysis of union AI principles)
Not every workplace needs a union. Every workplace needs workers who understand they have the right to organize one.
This is the same position labor took during industrialization. Same position during the rise of computing. Same position now. Every prior wave of technology required organized labor at the table to make sure the gains did not get captured entirely by capital.
AI is no different. The pace is faster. The stakes are higher.
What I am watching next
A few things every leader should have on a dashboard this year:
OpenAI's potential fourth-quarter 2026 IPO. This will be the test of whether the AI bubble has fundamentals or pure momentum. If the IPO stumbles, the dominoes will fall fast. (Fortune coverage)
2022 and 2023 venture fund vintages hitting their five-to-seven-year mark in 2027 and 2028. Expect fund closures, secondary sales at deep discounts, and a wave of uncomfortable conversations between General Partners and Limited Partners.
State-level AI labor legislation. Connecticut, Minnesota, California: all running parallel experiments. Whichever framework passes first becomes the federal template. (Felhaber Larson legal analysis)
The first major AI-addiction lawsuit against an employer. It is coming. The legal theories are already being tested in cases against Character.AI and OpenAI. When the first big employer gets named for knowingly deploying addictive AI tools without warnings, the entire compliance landscape shifts overnight. So, really think about this right now.
The bottom line
The AI revolution is real and the AI bubble is also real - they are both true at the same time.
The companies that survive what is coming will be the ones that:
Treat AI as a tool, not a religion or another round of stupid tech-bro hype.
Protect their people from the addictive design patterns built into these systems.
Stop letting investors (venture, private equity, or otherwise) push out the founders who built the value, the employees who gave their souls, and the many people in between; this creates too many casualties of war.
Invest in real reskilling, not performance theater. It is not stupid: skilling leads to performance, not the other way around.
Welcome organized workers to the table when AI deployment gets decided.
The smokescreen is falling. The Distributions to Paid-In numbers tell the story. The 95% zero-return-on-investment number from the Massachusetts Institute of Technology tells the story. The 560,000 weekly users showing crisis signals tell the story. I am writing this article to help you put all the dots together, because people like me see patterns well before many others. I am using AI to help my writing and bring this down to simple, digestible terms, but I wrote this article because I see the patterns, without AI.
Leaders, please step into this moment, or you will get exposed by it.
Markets, employees, and courts will expose you if you don't. It may be an employer's market right now, but we all know it will shift again. What goes up always comes down; that is the cycle of life in this Darwinist world.
I would rather we get ahead of it.
What I am asking you to do
If you are a CEO or founder: read this newsletter to your leadership team this week. Pick one of the five items above. Start. This is an education in the future that is coming; the question is whether we control it and act now, or not.
If you are a Chief Human Resources Officer: take the AI policy your Legal team gave you, throw most of it out, and write a real one with the leaders of the company, grounded in your values.
If you are an employee: pick one AI skill, learn it cold, and tell your manager about it on Monday. While you are at it, find out what your company's AI policy actually says. If you cannot find one, that is your answer.
If this hit something for you: forward it to one person who needs to read it.
If you got it from a friend: subscribe. The conversations we are having here are the ones nobody else is willing to have out loud.
See you in a week or so. Working on my modern-day Luddite piece next.
Best,
Kristy
P.S. I am in active federal complaint/litigation that informs how I see some of what I wrote in section two. I will not name the parties here. The patterns I described are not unique to any one fund or company. They are how this industry currently operates. The fact that I am living through one version of this story is exactly why I will not stop telling it.