Orbit Outlook: Coding Was Always Just a Tool

If you spend enough time scrolling through LinkedIn or skimming Medium and Substack newsletters right now, you’d think the end of the “IT job” is already here. With the rise of coding agents like Claude Code, Cursor, and Copilot, the prevailing narrative is that software engineering as we know it is over.

What “IT” Often Encompasses

But this panic relies on a fundamental misunderstanding of what “IT” actually is. People talk about the tech industry as a monolith, but it is actually a massive hub of distinct disciplines. When we say “IT jobs,” we are talking about:

  • The Deployment Lifecycle: DevOps and QA professionals ensuring that what gets built actually survives contact with reality.
  • Infrastructure & Security: Networking professionals, sysadmins, and security experts keeping the hardware and software foundation running.
  • The Data Ecosystem: Data governance, database design, data analysts, data engineers, and data scientists building the pipelines and extracting the signals.
  • Traditional Application Development: Frontend UI developers and UX designers crafting the experience; full-stack developers connecting the dots; and backend developers wrestling with Java, Python, C, and C++ across endless frameworks.

Some of these professionals don’t even code at all!

The Domain Dictates the Tool

In reality, the tech ecosystem is a fragmented collection of distinct disciplines and domains, each with its own specialized architecture:

  • The Environment: In a traditional bank, you might be strictly doing Java Enterprise development. In an ML team, you’re orchestrating Scala, Spark, and Big Data technologies.
  • The Infrastructure: Your machine learning role might be entirely on-prem dealing with custom PySpark clusters, or you might be in a cloud-native environment relying on GCP’s Vertex AI and Dataflow.
  • The Data Reality: A financial institution relies on tightly governed Postgres databases. An AdTech startup is wrestling with massive, chaotic data warehouses. A SaaS company might be running custom client pipelines synced across multiple clouds.
  • The Role Itself: A Data Analyst at a bank might spend their day building PowerBI reports for compliance. A Data Analyst at a startup is essentially a strategic advisor, paid for their human judgment and for the decisions they can draw from raw data.

The Scale of the Technology Gap

And although AI tools and agents help with every single one of these domains, we have to remember why these jobs exist in the first place: they were created to bridge a technology gap at a specific scale or in a specific industry.

These roles have always evolved; as technology matures, they will simply shift to different, AI-assisted tooling. It’s not like massive enterprise companies will suddenly fire all their integration engineers just because AI makes writing CI/CD pipelines and unit tests easier, or even automates them outright – and besides, writing tests is not all a QA engineer does; QA ensures the code is doing what it’s supposed to do. The need for a dedicated role is dictated by the complexity and scale of the business, not just the difficulty of the syntax.

In my previous team, we never had dedicated DevOps or integration engineers. Deployment, integration, and infrastructure were just part of our job as Data Scientists and ML Engineers. With supercharged AI tools now in hand, a team at our scale certainly won’t start hiring dedicated DevOps – but that’s primarily because our service model never required that level of isolated specialization in the first place. In an enterprise where that massive scale is required? The specialization isn’t going anywhere. Those engineers will just use AI to manage a vastly more complex infrastructure, and their domains may simply shift to become even more specialized.

The Evolution of Automation

Long before Large Language Models arrived, we were already trying to automate the grunt work of development. We had smart IDEs with auto-complete. We had out-of-the-box SaaS solutions. We had companies built entirely around visual scripting and low-code platforms. We even had outsourcing for boilerplate tasks.
Remember when drag-and-drop builders were supposed to end frontend development? Or when AutoML was going to replace Data Scientists and ML models overnight? Those tools became genuinely popular, yet both domains coexisted, each with their own use cases, catering to different audiences and different types of jobs. No doubt AI tools are stirring up the market even more than before.

But AI hasn’t magically birthed the concept of automated coding; it has just made it radically more accessible.

I’ve spent the last few months deeply experimenting with the current landscape of AI tools. I’ve thrown prompts at ChatGPT and Gemini, pushed VSCode integrations to their limits, tested Antigravity, wrestled with Claude Code, and integrated Cursor into my daily life. I wasn’t interested in seeing whether the ‘hands-off’ development dream was real. I just wanted to save time and iterate faster.

The Early Days

When AI Couldn’t Do Math

My skepticism of the ‘AI will do everything’ narrative comes from direct experience. While I was doing my BSc in Physics and Astronomy, right around late 2022 and 2023, the ChatGPT hype exploded. Like any overwhelmed student balancing a full-time Data Science job with a full-time degree, I tried to use it. In the absence of peers and a study group, I wanted it to help with assignments, double-check my solutions, and unblock me when I was stuck.

It was terrible.

At that time, it couldn’t even report a number correctly. For physics and astronomy, it was useless – it would confidently apply the wrong formulas, hallucinate constants, and produce textbook-style answers that were fundamentally flawed; even I could tell at a glance that they were wrong. Real textbook solutions, tutors, or even Chegg were infinitely better. I quickly gave up on using it as a scientific thinker.

The ‘Ctrl+C, Ctrl+V’ Evolution

But while it failed at physics, I found its real utility: it became my ultimate substitute for Stack Overflow.

My coding style has always been highly pragmatic. I iterate fast. The code doesn’t need to be a work of art on the first pass; it needs to work, and it needs to do the right thing. I am not ashamed to say that, if anything, I was a “Ctrl+C, Ctrl+V” developer when it came to boilerplate. Chatbots were perfect for this. I could describe the exact snippet I needed, paste an error trace, and get immediate debugging help instead of digging through forums.

My AI Assisted Workflow Today

Fast forward to today, and AI coding tools have made incredible leaps and bounds. Of course, some people run entire orchestrations of agents, custom skills, and completely automated agent systems, but that doesn’t fit my use case. I don’t have the time or patience to set up such an elaborate arrangement, pay for it, wait for it, and then QA and manage it; my use case is very hands-on, and my domain, data, and tooling are ever changing. After testing everything, my core workflow has settled into a very specific rhythm: Cursor, paired with Jupyter Notebooks for data science and ML, and mostly Python-based applications, pipelines, and APIs for productionizing.

Here is why Cursor:

  • VSCode with Copilot: Copilot didn’t have a planning mode when I used it, but Cursor did. Copilot also didn’t work well with Jupyter notebooks at the time, and it struggled with cross-file changes (at least in my use cases – I had to attach every file, and it would still miss important logic).
  • VSCode with Claude Code: I tried the Claude Code extension. There is no model choice, it took too long, and it didn’t work well with notebooks. I didn’t spend much time on it because I already had a good working option (Cursor) at the time. From what I’ve read, Claude Code is better in the CLI, and my work right now revolves around Jupyter notebooks; maybe when I’m creating a pipeline I will reference it again.
  • Antigravity: I love everything Google, but the usage limits hurt, and Cursor and Copilot provide better thinking with the ability to select a model. Antigravity’s multi-agent workflow seems confusing initially, though it shines at UI work (from designs and concepts to actual multi-file implementation).
  • Zed: I hear it’s more compute-friendly – a no-bloat IDE – but it does not have notebook support.

Today, I can describe my database connection and the high-level analysis I want, and Cursor will generate a remarkably solid implementation. But here is the critical distinction: this combination is powerful, but it is entirely hands-on.

I run the code cell by cell. It is readable, step-by-step code that I modify, build upon, and, most importantly, correct – because while AI has become a fantastic implementer, it is still not a great thinker.

Thanks to these tools, I will never again waste 20 minutes writing boilerplate plotting functions for matplotlib or setting up standard logger classes from scratch.
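
For a sense of what I mean by boilerplate, here is a minimal sketch of the kind of helpers I now let AI generate (the names and defaults are my own illustration, not from any specific project):

```python
import logging

import matplotlib.pyplot as plt
import pandas as pd


def get_logger(name: str, level: int = logging.INFO) -> logging.Logger:
    """Standard console logger with a timestamped format."""
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid duplicate handlers on notebook re-runs
        handler = logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("%(asctime)s | %(name)s | %(levelname)s | %(message)s")
        )
        logger.addHandler(handler)
    logger.setLevel(level)
    return logger


def plot_series(df: pd.DataFrame, columns: list[str], title: str = "") -> None:
    """Quick labeled line plot of selected columns against the index."""
    fig, ax = plt.subplots(figsize=(10, 4))
    for col in columns:
        ax.plot(df.index, df[col], label=col)
    ax.set_title(title)
    ax.legend()
    ax.grid(alpha=0.3)
    plt.show()
```

Nothing in there requires thought; it just used to require twenty minutes.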

But the AI is not the scientist. I am.

Where the ‘Autonomous Agent’ Illusion Breaks Down

Tech companies claim their agents can operate autonomously – that you can just spin up an “Analyst Agent,” give it a high-level goal, and watch it work.

In practice, especially in data science and physics, this falls apart fast. I work with novel use cases, multi-terabyte datasets, and raw, messy, noisy signals across multiple data channels – not to mention constantly changing domains. If I have to spend days cataloging every quirk of an existing dataset and explaining the physical constraints of the universe to an AI just so it can understand the problem, only for it to act as a black box and still mess up, it’s not saving me time.

Here is exactly what the AI can’t do:

  • It doesn’t understand the context of the data: It doesn’t know where to source niche satellite telemetry or how to properly extract signals from noisy, physical sensor data.
  • It makes rookie mistakes in EDA: It will blindly run statistical tests or normalize data without understanding the underlying physical distribution or the real-world constraints of the dataset. It won’t even pick the right normalization method (see the sketch just after this list).
  • It cannot think through a complex scientific process: It struggles with the overarching architecture of a multi-step, exploratory research problem.
  • It cannot provide explainability: As a service and insights provider, we need to understand our models well enough to explain them – to show that the design has undergone proper data analysis, that we have identified the driving features, or at the very least that we have rigorously evaluated the results.
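
To make the normalization point concrete, here is a hedged sketch (synthetic data, illustrative variable names): blindly z-scoring a heavy-tailed, strictly positive quantity lets outliers dominate, while normalizing in log space respects the underlying distribution:

```python
import numpy as np

rng = np.random.default_rng(42)
flux = rng.lognormal(mean=0.0, sigma=1.5, size=10_000)  # heavy-tailed, strictly positive

# The "blind" approach: z-score the raw values. A few outliers inflate
# the standard deviation and squash most of the data into a narrow band.
z_raw = (flux - flux.mean()) / flux.std()

# The physically sensible approach: the quantity is multiplicative, so
# normalize in log space, where it is roughly Gaussian.
log_flux = np.log(flux)
z_log = (log_flux - log_flux.mean()) / log_flux.std()

print(f"max |z|, raw: {np.abs(z_raw).max():.1f}")  # huge, outlier-driven
print(f"max |z|, log: {np.abs(z_log).max():.1f}")  # well behaved
```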

Coding is just the translation layer between human intuition and machine execution. AI has simply made that translation layer faster.

Agent-to-Agent Dystopia

And yes, I know the ultimate counterargument: “Why do we even need code if an AI agent can just talk directly to another AI agent?” The vision is an internet with no IT companies building software, no rigid APIs, and no deterministic logic – a digital world managed by autonomous black boxes, agents bypassing code to talk to other agents, retrieving information, and generating insights dynamically on the fly.

Let’s unpack how catastrophic that actually is.

End of Determinism

Code, at its core, is a contract. If X happens, do Y. It is deterministic. When you build a REST API or a data pipeline, you are defining the exact boundaries of how information is stored, transformed, and retrieved. Large Language Models, however, are probabilistic. They do not execute logic; they predict the next most likely token.

If we replace the internet’s software infrastructure with autonomous agents talking to other agents, we are trading deterministic contracts for probabilistic guesses.
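
As a toy illustration of that trade (a hypothetical validator, not any real system’s API): a contract means identical inputs always produce identical outputs, with explicit boundaries:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Reading:
    sensor_id: str
    value_kelvin: float


def validate(reading: Reading) -> Reading:
    """Deterministic contract: the same input always yields the same
    result, and anything outside the boundary is rejected loudly."""
    if not 0.0 <= reading.value_kelvin <= 400.0:
        raise ValueError(f"{reading.sensor_id}: {reading.value_kelvin} K out of range")
    return reading


# validate(Reading("sat-07", 291.4)) gives the same answer on every call;
# an agent asked "does this reading look plausible?" might not.
```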

The Cascading Hallucination

Imagine an Earth Observation pipeline built entirely of agents. Agent A observes raw satellite telemetry and summarizes it for Agent B. Agent B interprets that summary and generates a weather insight for Agent C. Agent C decides whether to issue a disaster warning.

Because there is no rigid code, there is no traceability. If Agent A hallucinates a data point, Agent B treats it as fact, and Agent C triggers a false alarm. In a purely agentic internet, we are playing an infinite, high-speed game of Telephone. Without code to define the protocol, errors don’t just happen; they compound exponentially.
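
A back-of-the-envelope sketch of that compounding (the 95% figure is purely illustrative): if each hop independently preserves the truth with probability p, the chain’s reliability decays geometrically with its length:

```python
# Probability the whole chain is correct if each of n agents
# independently preserves the truth with probability p.
p = 0.95
for n in (1, 3, 5, 10):
    print(f"{n:>2} agents -> chain reliability {p ** n:.3f}")
# 1 -> 0.950, 3 -> 0.857, 5 -> 0.774, 10 -> 0.599
```

Three hops already turn a component you would trust into a pipeline you shouldn’t bet a disaster warning on.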

The Social Black Box

Socially, this opens a massive can of worms. If information is constantly being summarized, re-interpreted, and served by black-box agents rather than retrieved verbatim from a database, who owns the “truth”?

  • The Bias Amplifier: We already know models carry the biases of their training data. If agents are constantly consuming the outputs of other agents, we enter an algorithmic echo chamber where biases are amplified and marginalized data is simply smoothed out of existence.
  • The Death of Auditing: You cannot audit a vibe. If a traditional algorithm denies someone a loan or misidentifies a demographic, you can review the code and the SQL queries (see the sketch just after this list). If a web of agents makes that same decision based on “dynamic reasoning,” there is no code to review. You have no way to prove whether the system acted maliciously, erroneously, or fairly.
  • The Manipulation of Reality: In an agent-only internet, the concept of a primary source vanishes. Information becomes fluid, dynamically tailored by the agent serving it to you. It is the ultimate tool for reality manipulation because there is no raw, underlying HTML or deterministic database query you can point to and say, “Look, this is what the data actually says.”
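
To make the auditing contrast concrete, here is a deliberately minimal, hypothetical decision rule (the thresholds are illustrative). Every outcome names the exact branch that produced it – something a regulator can read line by line, and exactly the artifact a web of agents never leaves behind:

```python
def loan_decision(income: float, debt: float, credit_score: int) -> tuple[bool, str]:
    """Deterministic and auditable: every rejection cites its rule."""
    if credit_score < 620:
        return False, "credit_score below 620"
    if debt / max(income, 1.0) > 0.43:
        return False, "debt-to-income ratio above 43%"
    return True, "all criteria met"
```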

[Embedded: a simplified NotebookLM explanation of everything described above. (I love NotebookLM.)]

I am aware that there are protocols like MCP and A2A, along with multi-agent orchestration rules, best practices, and success stories; I have read some of Anthropic’s myself. But there is a difference between isolated tasks or products and the whole system, and I think the adoption concerns remain valid. For what it’s worth, those setups still require AI Engineers creating RAG pipelines, .md instruction files, and so on.

Topics like these require their own essays, but they prove the foundational point: code is a powerful tool, and not just a tool for building apps. It is the mechanism by which we enforce logic, accountability, and truth in digital systems. And it takes different forms, which have evolved over time and will continue to do so.

The Big Myth: LLMs Are Not Specialized ML Models

But even if we ignore the ethical nightmare, the fundamental role of a scientist remains unchanged. When we are trying to understand the universe, we need to work it out ourselves for the exact same reason we still learn math after the invention of calculators.

Understanding is a prerequisite for exploration and innovation. As scientists, we must know how our tools work. We need to measure their reliability, map their biases, and calculate their confidence intervals. If we don’t understand the tool, we cannot trust it to do science. That is why, despite all the AI hype, we still prefer getting our hands dirty in Jupyter Notebooks.

In my day-to-day work, I use AI in two very distinct ways: as a coding agent to speed up implementation, and as a generative model for broad insights or text processing.

But the biggest misconception driving the current AI hype cycle is the total conflation of Large Language Models (LLMs) with specialized Machine Learning. People assume that because an AI agent can chat with you fluidly, it is replacing the data scientists/ML Engineers who develop models to predict, forecast, and assess human behavior or physical systems.

It isn’t.

  • Gemini isn’t predicting the weather. When you ask an AI if it will rain tomorrow, the LLM isn’t calculating atmospheric physics. It is simply fetching and summarizing the output of a specialized Numerical Weather Prediction (NWP) model or a spatiotemporal ML forecast – the exact kind of systems we’ve spent years building.
  • LLMs aren’t doing complex time-series forecasting. An AI chat interface cannot ingest 10 years of proprietary, messy, unnormalized sales data and accurately forecast inventory for a specific retail store next quarter (see the sketch just after this list).
  • They aren’t running behavioral ad-tech. While an LLM can generate marketing copy, it is not running the real-time, high-dimensional vector similarity searches required to identify semantic phenotypes and lookalike patterns from millions of raw behavioral logs.
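
For contrast, here is the shape of the specialized, grounded work the LLM is not doing – a deliberately naive seasonal baseline for weekly sales (hypothetical data layout; real systems use far richer spatiotemporal models):

```python
import pandas as pd


def seasonal_baseline_forecast(sales: pd.Series, horizon: int = 13) -> pd.Series:
    """Forecast weekly sales as the historical mean of the same ISO week.
    `sales` must be a weekly series indexed by timestamps."""
    weekly_mean = sales.groupby(sales.index.isocalendar().week).mean()
    future_index = pd.date_range(
        sales.index[-1] + pd.Timedelta(weeks=1), periods=horizon, freq="W"
    )
    future_weeks = future_index.isocalendar().week
    return pd.Series(weekly_mean.reindex(future_weeks).to_numpy(), index=future_index)
```

Even this toy requires knowing that the data is weekly, seasonal, and full of quirks – exactly the proprietary context a chat interface never sees.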

Language models are incredible translation layers. They are not predictive mathematical engines grounded in physical reality.

Conclusion: The Science is the Destination

As someone who has spent the last few years productionizing complex machine learning systems and is now shifting my focus toward astrophysics and Earth Observation, I’m simply not buying the panic. Why? Because I’ve always viewed coding as a tool, not the destination.

Coding is a tool, a language that defines deterministic logic for a machine to follow, serving a purpose like performing calculations you can’t do yourself.

Machine Learning is just an optimized way to do those computations: models minimize loss functions to either uncover hidden insights or generate predictions based on real-world patterns.
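
That whole idea fits in a few lines. Here is a minimal illustration with synthetic data: fitting a line by gradient descent, i.e. repeatedly nudging parameters to minimize a squared-error loss:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 1.5 + rng.normal(0, 1, 200)  # hidden pattern: slope 3, intercept 1.5

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    err = (w * x + b) - y            # residuals under the current fit
    w -= lr * 2 * (err * x).mean()   # gradient of mean squared error w.r.t. w
    b -= lr * 2 * err.mean()         # gradient of mean squared error w.r.t. b

print(f"learned w={w:.2f}, b={b:.2f}")  # recovers roughly 3.0 and 1.5
```

Everything from a weather model to an LLM is, at its core, this loop scaled up.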

LLMs are simply one specific type of ML model. They have been trained on massive amounts of text to be able to suggest things like the right code for your use case, and that’s exactly what they’re good at. Yes, today’s AI tooling has made it possible for LLMs to be leveraged into agentic workflows, taking on roles not just as implementers of code, but as evaluators, instructors, and executors. But make no mistake: these agents are still just tools for the human to wield (that is my opinion at least).

At the end of the day, I am still the one deciding whether we need a threshold and where, which model we need, and whether a given approach makes sense at all. In my workflow, I use AI to create the cells, but I am still running the cells. I am still the one looking at the visualizations, interpreting the anomalies, and deciding what the next logical step in the pipeline needs to be. Coding is just the translation layer between human intuition and machine execution. AI has simply made that translation layer faster.

For those of us working on solving complex, unstructured problems – whether that’s spatiotemporal forecasting, managing disaster response via satellite data, or modeling the cosmos – AI isn’t taking our jobs. It’s just taking out the trash so we can focus on the actual science.

It’s a powerful tool – use it!
