Today my department received news it didn't want to hear - four of our colleagues were being let go, and for now their workload would have to be shared out among the whole department. A department that, after an earlier round of layoffs in 2025, was already crushed under its workload. People were alarmed, concerned, and frustrated, to say the least. The word on everyone's lips was that this was the start of the "AI layoffs" that most of the department has been expecting for a while.
As someone who works with, and adjacent to, AI, I hear a lot of hype, and maybe myths, about these systems being the cause of all our future problems. AI is neither the panacea it is being sold as, nor the dystopian wasteland that others see it as. As with previous technology waves, the truth lies somewhere in the middle. The problem is that, as with the dot-com era, the cloud computing era, or the much shorter crypto boom, a crash with lots of disappointment will have to come first. And how much economic devastation that brings depends a lot on how things play out in the very near future.
When looking at AI and the promise it brings - and I have worked with the LLM AI that most people are familiar with (Claude, ChatGPT, DeepSeek, Gemini, Grok, etc.), but also agentic AI, and have even demo'd VLM video AI modeling - it is important to understand that AI is a technology: software mixed with hardware (servers, storage, etc.). Rather than bringing about the apocalypse, we are much more likely to see mundane bugs and clear limitations as time goes on. And it is for this reason that I worry about the dangers of the short term versus the promise of the long term.
The short term has very real concerns: companies embracing AI without fully understanding its limitations, and companies trumpeting their use of AI when what they claim to use it for doesn't exist and is an empty bag. Companies are racing to build data centers with little regard for capacity planning or even actual need, and are going about it recklessly. OpenAI, Anthropic, Meta, Anduril, and other AI companies are borrowing money from each other, from private equity firms, and even in some cases investing in derivatives (remember those from the 2008 housing crash?) to raise money to build huge data centers. When AI turns out to be just another mundane new tool that can be used in limited ways (like all technology), I fear that these loans embedded in loans, and the data centers built or being built, will be abandoned and cause a very nasty global recession.
Let me explain from the technology industry coal face: LLM AI models will help workers build slides for customer presentations, ingest data and output graphs and analytics, and summarize the issues their own customers are seeing. But AI cannot do their jobs for them: it cannot make phone calls, it cannot troubleshoot problems diplomatically, it cannot do a lot of the things that everyday workers do. Can it make those workers' jobs a bit simpler or easier? Absolutely. But the day-to-day tasks of most workers or engineers are not fully replicable with AI. Also, as I mentioned, AI is software married to hardware, with all the problems that come with that. I have been around for many huge technology roll-outs, and the first five years are spent finding defects or flaws in the configurations and patching or fixing them. It won't be any different with AI. It will not change everything immediately; there will be trials and there will be errors. Humans created it, so it will have all the flaws and foibles that we bring to it.
Agentic AI has a lot more promise - these are very specialized AI models embedded in devices or systems. For instance, agentic AI inside the routers and switches of the internet backbone (corporate or otherwise) can monitor traffic on the network in real time and, if something goes wrong, work to resolve it based on the collective knowledge it carries. It is a self-contained AI used for a very specific purpose. It could revolutionize financial technology, network security, risk management, and HR. But again, it is not part of some large AI "monster"; it is limited to the goals it was designed to achieve.
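To make the "monitor and self-remediate" loop above concrete, here is a deliberately toy sketch in Python. Everything in it (the metric names, the KNOWN_FIXES table, the threshold values) is a hypothetical illustration of the pattern, not a real router API; actual network agents are far more sophisticated.

```python
# Toy sketch of a self-contained agentic loop: watch metrics, match them
# against a local knowledge base, and apply a known remediation.

# The "collective knowledge" the agent carries: symptom -> remediation.
KNOWN_FIXES = {
    "high_packet_loss": "reroute_traffic",
    "link_flapping": "disable_port",
}

def diagnose(metrics):
    """Map raw link metrics to a known symptom, or None if all is well."""
    if metrics["packet_loss_pct"] > 5.0:
        return "high_packet_loss"
    if metrics["state_changes_per_min"] > 10:
        return "link_flapping"
    return None

def agent_step(metrics, log):
    """One pass of the monitor loop: diagnose, then self-remediate."""
    symptom = diagnose(metrics)
    if symptom is None:
        return "healthy"
    # Fall back to a human when the symptom is outside the agent's knowledge.
    action = KNOWN_FIXES.get(symptom, "escalate_to_human")
    log.append((symptom, action))
    return action

log = []
print(agent_step({"packet_loss_pct": 8.2, "state_changes_per_min": 1}, log))
# -> reroute_traffic
```

The key design point, matching the paragraph above: the agent only ever acts within its built-in knowledge base, and anything it does not recognize gets escalated rather than improvised.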
VLM or video AI has some pretty amazing potential as well, but also many dangers around privacy. VLMs can be used in limited ways for building security (schools, restricted buildings, etc.) and localized law enforcement. The real danger is that a system that can analyze huge amounts of video and image data can violate privacy, especially when paired with facial recognition software to track individuals and whole communities. The recent ICE/CBP raids in Chicago and Minneapolis should give us pause about the use, or abuse, of VLM AI.
So AI technology has a lot of good potential to help in a lot of areas. But right now we are riding a huge wave that says it is going to fix everything, or destroy everything, depending on your view of it. I think the current irrational exuberance for AI will lead to instability in the short term, and a financial crash is not out of the question. Companies moving quickly may cut staffing and turn over large portions of their systems to AI, only to realize later what AI can and cannot do.
But longer term I think AI will be of benefit to us all - but only after we learn to understand its limitations and think of it as a tool, not as something that is going to destroy the world.