In the fall of 2021, I began using OpenAI’s API to analyze transcribed voice recordings and return intelligent responses. Since then, I’ve continued to explore and refine my prompting techniques, drawing insights from the various AI tools and information sources I regularly consume. Wanting to deepen my understanding, I recently read Prompt Engineering by Lee Boonstra.
The book reinforced what I had learned about configuring an LLM through an API—tuning parameters like output length, temperature, top-K, and top-P. Choosing the right values for each depends on your use case, and each parameter affects the others. The clearer your prompt, the better your results. What stood out most to me were the defined techniques for interacting with LLMs. It seems crucial to have a shared vocabulary around this emerging craft. Some of the techniques covered included:
Zero-shot, one-shot, and few-shot prompting
System, contextual, and role prompting
Chain of thought, self-consistency, and tree of thought prompting
Step-back prompting and decomposition strategies
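To make one of these concrete, here is a minimal sketch of few-shot prompting: a couple of worked examples are placed in the chat history before the real input, so the model infers the expected format. The model name, sampling values, and the classification task itself are my own illustrative assumptions, not recommendations from the book.

```python
# Sketch: few-shot sentiment classification via chat messages.
# The worked examples "teach" the model the label format before
# the real input is sent.

FEW_SHOT_EXAMPLES = [
    ("The checkout flow is broken again.", "negative"),
    ("Support resolved my issue in minutes!", "positive"),
]

def build_few_shot_messages(user_input: str) -> list[dict]:
    """Assemble a chat payload: system role, worked examples, then the query."""
    messages = [{"role": "system",
                 "content": "Classify sentiment as 'positive' or 'negative'."}]
    for text, label in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": user_input})
    return messages

payload = {
    "model": "gpt-4o-mini",   # assumed model name
    "messages": build_few_shot_messages("Love the new dashboard."),
    "temperature": 0.2,       # low temperature: more deterministic labels
    "top_p": 1.0,
    "max_tokens": 5,          # cap output length for a one-word answer
}
# client.chat.completions.create(**payload)  # actual API call omitted
```

Note how the configuration parameters from earlier (temperature, top-P, output length) show up alongside the prompt itself: a classification task wants low temperature and a tight token cap, while a brainstorming task would want the opposite.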
I would recommend googling some of these techniques if you are not already prompting with intent while using AI tools. I expect many technical people are, but there are probably vast numbers of people untrained on the subject.
It’s fascinating to witness these naming conventions form in real time as we work to define patterns across how people interact with language models. From basic Q&A to workflow automation and agentic systems, the timeline of AI development is accelerating. Yet, there’s a disconnect: the capital investment in AI outpaces the maturity of the technology itself.
I use tools like ChatGPT, Claude, v0.dev, GitHub Copilot, and Cursor regularly in my life and work. These tools are incredibly useful, but it seems we’re in a hype phase. They need review and well-thought-out prompts to produce good results, and in my experience, the larger the project, the less capable they become. Nations are pouring vast sums of money into AI development and infrastructure. But is this all just about AI software engineers, chatbots, and workflow automation? I’m not so sure.
This makes me wonder: what am I not seeing? I rarely encounter broader discussions about where AI is truly headed. Imagine Country A invests heavily in AI infrastructure; suddenly all businesses within that country (and beyond) are expected to integrate with it. That means most businesses will need software built to transform traditional workflows into AI-native processes. Throw in autonomous cars and robots, and there is a lot of work to do.
Behind every intelligent system is a silent layer of infrastructure—code, integrations, model orchestration, security protocols. All of this needs to be designed, developed, and maintained. I don’t imagine it will be built by LLMs alone.
If we imagine every system and business process being rewritten to work alongside intelligent agents, a massive new layer of engineering, integration, and long-term maintenance will be required.
To build for this future, we need to understand the emerging AI software stack:
Foundation models: LLMs, vision models, speech models
Middleware: Vector databases, orchestration frameworks (LangChain, CrewAI, AutoGen)
Interface layers: APIs, agents, UIs, voice interfaces
DevOps for AI: Prompt testing, versioning, fine-tuning pipelines, evaluation metrics
Data governance: PII handling, security, hallucination mitigation, IP protection
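To make the middleware layer concrete, here is a toy sketch of what a vector database does at its core: store embeddings and return the ones nearest to a query. Real systems (the ones orchestration frameworks like LangChain sit on top of) use approximate nearest-neighbor indexes at scale; the three-dimensional vectors below are made-up values for illustration only.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Angle-based similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query: list[float], store: dict[str, list[float]], k: int = 2) -> list[str]:
    """Rank stored documents by similarity to the query embedding; return the top k ids."""
    ranked = sorted(store.items(),
                    key=lambda item: cosine_similarity(query, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Tiny fake "index": document id -> embedding (illustrative values only).
index = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-faq":  [0.1, 0.8, 0.1],
    "api-docs":      [0.0, 0.2, 0.9],
}
print(top_k([0.85, 0.15, 0.0], index, k=1))  # a query "near" the refund doc
```

This retrieval step is what grounds an LLM in an organization’s own documents before generation, which is why vector stores sit in the middle of nearly every serious AI stack.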
This stack is still immature and fragmented. There’s no "HTTP for AI" yet—no standard layer that makes intelligent systems interoperable and durable. As a result, the demand for AI-native software engineers is growing rapidly.
I think companies will be looking for engineers who can:
Fine-tune models on domain-specific data
Chain tools and models into multi-step workflows
Align LLMs to an organization’s tone, goals, and data
Monitor model performance, drift, and hallucinations
Build AI-first UX/UI patterns
Design multi-turn experiences that feel adaptive, not brittle
Treat AI like a living system: deploy models, monitor them, and control infrastructure costs
Create CI/CD for prompts, embeddings, and evaluations
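That last point can be sketched as a regression test: each prompt change runs against a set of cases whose outputs must satisfy simple checks, so a change that breaks expected behavior fails the build. The cases, check predicates, and stub model below are all hypothetical; in a real pipeline the stub would be replaced with a live model call.

```python
# Sketch of "unit tests for prompts": pair each input with a predicate
# the model's output must satisfy. A stub stands in for the LLM so the
# harness itself can run offline in CI.

CASES = [
    {"input": "Refund order #123",    "check": lambda out: "refund" in out.lower()},
    {"input": "Where is my package?", "check": lambda out: len(out) > 0},
]

def stub_model(prompt: str) -> str:
    """Stand-in for a real LLM call (assumption: swap in an actual client)."""
    return f"Acknowledged: {prompt}. A refund may apply."

def run_eval(model) -> dict:
    """Run every case against the model and tally pass/fail counts."""
    results = {"passed": 0, "failed": 0}
    for case in CASES:
        output = model(case["input"])
        results["passed" if case["check"](output) else "failed"] += 1
    return results

print(run_eval(stub_model))  # {'passed': 2, 'failed': 0}
```

Versioning the prompts and cases together in the repo means a prompt edit gets the same review and gating as a code change, which is the whole point of CI/CD for prompts.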
As someone who writes software for a living in 2025, I sometimes feel fear of change. In an effort to keep doing a job I enjoy and pay the bills, I am determined to keep educating myself and adapting as needed. There is definitely a part of me that hopes the AI trend is a bubble, or at least that its environmental problems get solved. AI technology is not something I think everyone wants, but the heart of capitalism is certainly excited by the thought of increasing profits through automation.
Whether we like it or not, it seems we are entering an era where intelligent systems will be embedded in everything, and as that happens, the software powering these systems will quietly become one of the most important layers of modern life.
While I think there is some time for all this tech to develop, maybe Sam Altman’s AGI will poof the world into full automation mode sooner than I expect. Or possibly we continue on for 10 years before the technology comes to fruition. I guess we will have to ride the wave.