Folks aren't forming memories like they used to


Used to saying the same thing over and over again to a computer? How about to a person? A living person? I've found myself in that position lately.

You don't have to be a $200/mo plan maximalist to observe how agents forget things over and over. They have a context window — and so do people when they context switch between too many things at once.

Before all this AI bubble craze, I found my limits as a manager. It was easy to manage and direct two projects. Three got difficult when I still had to contribute or direct specific details. Four is where things broke down. I had to externalize my memories, ideas, opinions, and plans to written form. The bullet journal method, for one, helped me pretend to have more active memory than I biologically had. I have techniques that work for me when I need to switch between contexts every day.

Even with external memories, I struggle if I actively switch across more than four contexts a day. It's like I can't "load" more than four because "unloading" is only realized through unconsciousness. After work, I might take an hour-long nap if it was an especially cognitively burdensome day so I could get back to my own personal life with my renewed… context load budget, or something.

As companies lay off / fire more of their people in the name of AI — but really the economy, paying for more data centers or increasingly expensive chips, and the evaporation of zero-cost loans and investments — their employees are having to handle more contexts than before with the sparkles of AI. And it's breaking them.

Those still on payroll are burning out and they don't know why. They're getting better at prompting these LLMs and the vendors are releasing more capable models every few months. They're getting more done than before. They're handling more contexts than before.

If LLMs are doing more of their work, how could they be burning out? Folks are context switching too much. Without the time to appreciate the results of their work and to form memories around its creation, they're losing the drive that enables them in the first place. Instead, all that remains is some continual swipe-right haze — an undefined reward that reinforces the use of these tools and not the lessons learned while using them. They're pressured from above to do three to five two-pizza teams' worth of work alone, all while forming a dependence on Anthropic and OpenAI in the process.

Cendyne holds and sniffs a flower. It comes from corrupted ground.

What's wild and weird for me is how learning and critical thinking are eroding before my eyes in my coworkers. My management style — while I had a team to manage — was to gradually inform and lead others along a thoughtful pathway to reliably come to the answer on their own. My techniques, Instructional Scaffolding and Discovery Learning, stopped working. I am unable to guide others to be independent when they're forgetting what I told them two days ago.

I'm having this weird cognitive dissonance where I'm using the same tactics on a person that I use on a forgetful LLM agent. I have to gradually refine my memos and share them multiple times a week. Each is very condensed and to the point (unlike his GPT-generated work), with links to other memos.

As the only remaining technology person, every weird tech-shaped email is sent my way with the question of whether it's real or a phishing attempt. Sometimes it is an automated notice about something we did together only two hours earlier. In situations like this I think: how did you forget how to read? Where have your memories gone?

It has become easier to draft up a response for them to plagiarize than to equip them with the knowledge and resources to independently and competently act and communicate. I can put <TODO look up % value here> and the other will execute my instructions to fill out these details. It's like pretending to be an agent to someone who only knows how to interact with agents. And… they act like an agent too in this exchange.

There's a new balance I'm having to figure out each day on where another's critical-thinking capacity is. If I guess too high, they get confused and don't act. If I guess too low, they get annoyed.

This is a problem. Ownership, creativity, innovation, and skill are eroding in the workplace (and education). At a scale never encountered before, individual contributors are faced with a situation that requires managerial and leadership skills to succeed. Instead of swimming, folks are sinking under the pressure. They are giving up the skills that got them where they are, and they lean on LLMs to maintain their facade of outward productivity.

Anthropic and OpenAI aren't the only parties to blame here. Economic conditions are forcing leaders to shrink headcount without shrinking the scope of their business. They lean on the small truth that small teams (read: two-pizza teams) can iterate faster than multi-layer organizations. They flatten the org, reduce management, increase the reports-to-manager ratio, and throw token credits and quotas at every individual contributor in hopes that they'll remain competitive against the startups using AI tools to nip at their SaaS. (See $300 Billion Evaporated. The SaaS-Pocalypse Has Begun (archived).)

Leaders of previously two-pizza teams will be forced to learn their context switching limits the hard way or sink into this AI-enabled zombie form I described earlier. SaaS companies aren't going to fail because someone can vibe-code their product in a month. They'll fail because they've reorganized into zombies and produce results like Microsoft.

It's so fascinating to see the changes in one company's logo over the years.

A bunch of Microsoft logos from the 70s, 80s, 90s, and today, but instead of saying Microsoft all of them spell Microslop in the vibe of the original logos.

Microsoft happens to be signaling an intent to turn itself around in the face of its degrading reputation. We'll see if they really recognize the systemic issue of turning their staff into AI-enabled zombies who fail to push back on stupid ideas like AI in Notepad.

Our commitment to Windows quality

Hello Windows Insiders,

I want to speak to you directly, as an engineer who has spent his career building technology that people depend on every day. Windows touches more people's lives than almost any technology on Earth. Every day, we hear from the community about how you experience Windows. And over the past several months, the team and I have spent a great deal of time analyzing your feedback. What came through was the voice of people who care deeply about Windows and want it to be better.

Today, I’m sharing what we are doing in response. Here are some of the initial changes we will preview in builds with Windows Insiders this month and throughout April.

shrug

The "data is the new oil" fad came and went on LinkedIn as the new way to squeeze value out of their company. While realized in the worst way possible (archived), selling first-party data has only materialized for customers that sell ads, trade on investments, scam people, or otherwise spy on people.

I can only hope this private-equity style of gutting your own organization like Block slows down soon.

Jack Dorsey

we're not making this decision because we're in trouble. our business is strong. gross profit continues to grow, we continue to serve more and more customers, and profitability is improving. but something has changed. we're already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that's accelerating rapidly. - post on twitter


And I hope the cost of context switching is acknowledged and realized as a human limitation.