AI as a Tool-Maker instead of a tool


The Core Idea: AI as Your Personal Toolsmith

The way we use the current crop of AI tools is rooted in our traditional relationship with tools. While Large Language Model (LLM) clients appear to act like “smart assistants” that answer questions and perform some tasks, there is another, possibly more powerful way to use them: as a tool-maker. Instead of just using pre-built applications, you can now direct an AI to create small, hyper-personalized tools that make you more efficient. This represents a significant shift in methodology.

This idea crystallized for me recently while experimenting with creating my own tool scripts for local LLM clients (via the Model Context Protocol, MCP). The release of features like “Claude Skills” shifted my perspective. It simplified the process, emphasizing the tool you wanted and the instructions to use it over the code itself. I realised that the tools I was creating for the LLM client were just as useful to me without the client.

But what exactly do we mean by a “tool” in this context?

Tool use by LLMs has evolved rapidly. It is important to note, however, that the LLM client is the key to this capability: the LLM asks the client to run a tool, and the client reports the result back.

What is a “Tool”? (Hint: It’s Not Just Code)

We often assume that a “tool” has to be a complex application like a spreadsheet or a web browser. In reality, a tool is anything that solves a specific problem for you. Think smaller, simpler, and more personal. Often, recognizing these needs requires us to examine our invisible habits and the pains they cause.

These are the kinds of tools you can create in minutes:

  • A reusable, complex prompt that summarizes your meeting notes in the exact format you need.
  • A spreadsheet that shows your expenses by categories you care about.
  • A semantic analysis of your writing that checks for cohesion.
  • A series of small utilities that download and analyze your web server logs.

These tools address very specific needs and circumstances. Having them makes my life easier, but it is unlikely that anyone else has exactly the same needs. No one is going to build such a specific tool for general use.

I recently built a tool to analyze the semantic cohesion of my blog posts, using AI-assisted code generation. It started out as a tool for AI-assisted writing, but I ended up using the tool myself. I added some graphs to help me interpret the output instantly. It is now a useful tool for me, but it is also very specific to my blog file syntax and how I structure things. It is a perfect example of a personal tool that would never exist as a commercial app.

For inspiration, have a look at Fabric, an open-source collection of robust, reusable prompts.

The Tool-Maker Approach: Enabling Yourself

The key to this approach is to stop treating the LLM as a universal tool and start treating it as a skilled tool-maker. You are the expert and the architect; the AI is the builder. This is the core of strategic anchoring: committing to a methodology of tool creation rather than just tool consumption.

Instead of asking, “AI, what’s the answer?”, you ask, “AI, build me a tool that can help me find the answer.” This shift is subtle but profound. It moves you from being a passive user to an active creator, directly enabling yourself to be more effective. At this point it becomes clear that the tools you can build this way are much broader than just the tools that you can build by hand.

I started using my LLM client not to do the work, but to help me build and maintain the little tools that did the work. The LLM was not the log analyzer; it was the tool that got me the log analyzer. It had become my tool-tool. If this were Star Trek, I’d be using a replicator to print the wrench I need to fix the warp drive. Some LLM clients have become my personal replicators.

The Low Barrier to Entry

You do not need to be an expert programmer to become a tool-maker. The primary requirements are your own expertise in what you need and the ability to give clear instructions: prompt engineering. Modern AI clients, especially those running on your local machine (like CLIs or IDE extensions), often have the basic capabilities needed to create fully functioning tools. The minimal capabilities required of the client are surprisingly low: reading and writing local files. Even web-based clients can get you most of the way there, but may require some additional steps.

Often, the “tool” is simply a well-crafted set of instructions that the AI can execute. The complexity lies in defining your problem clearly, not in writing complicated code. This accessibility means you no longer have to wait for the perfect app to appear. It highlights that code is just language used to express your ideas.
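As a concrete example, an instruction-only tool can be nothing more than a reusable prompt saved as a plain file. The filename and format below are hypothetical, a sketch of the “summarize my meeting notes” tool mentioned earlier:

```markdown
# summarize-meeting.md — a reusable “tool” that is pure instructions

Summarize the meeting notes I paste below using exactly this format:

- **Decisions:** one bullet per decision, with the owner in bold
- **Action items:** `- [ ] task — owner — due date` (use "TBD" if missing)
- **Open questions:** one bullet each, phrased as a question

Keep the summary under 200 words. Do not invent names or dates that
are not in the notes.
```

Paste the file's contents into any LLM client, follow it with your notes, and you have a tool that produces the same output format every time.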

Local Clients are Powerful

  • CLI: Command Line Interfaces like Gemini CLI or OpenCode.
  • IDE: Integrated Development Environments like Cursor or VSCode extensions.

These environments give the AI the agency it needs to create and save the tools you design.

Without access to your local computer, the AI cannot save files for you; you will have to do some copy-and-paste work yourself, but the approach still works.

Addressing your own Need

The tool you build for yourself does not need a polished UI, a marketing plan, or the ability to scale to millions of users. It only needs to solve your problem. Because the AI can help you build it in minutes, the time investment is minimal, and the payoff in personal efficiency is enormous. And you can iterate on your design: refine, extend, and improve it over time.

This approach also aligns well with the idea that your tools don’t need to be apps. Simple scripts and static outputs are often more effective and easier for AI agents to work with.

I hit this wall with web server log analysis. The existing tools were bloated, expensive, or incompatible with my needs. So, I made several small tools instead: one to download logs, another to consolidate them, and a third to run my specific analysis. I even added a rudimentary local front-end display. Each step was a small, useful, and personal solution.

A key consideration is to break up your needs into small steps. This is the Unix philosophy. Restating its core principles in our context:

  • Make tools that do one thing well
  • Make tools that work together
  • Make tools that handle text (JSON, YAML) as input and output
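To make the principles concrete, here is a minimal sketch of three standard tools composed with pipes. The log lines are made up; in practice each stage would be one of your own small scripts:

```shell
#!/usr/bin/env bash
# Minimal illustration of the three principles: each stage does one
# thing, the stages connect through pipes, and plain text is the glue.
set -euo pipefail

rank_404s() {
  printf '%s\n' \
    'GET /index.html 200' \
    'GET /.env 404' \
    'GET /.env 404' \
    'GET /wp-login.php 404' |
  grep ' 404$' |                 # stage 1: keep only the misses
  awk '{ count[$2]++ }
       END { for (p in count) print count[p], p }' |  # stage 2: tally paths
  sort -rn                       # stage 3: rank by frequency
}

rank_404s
```

Swap any stage for a different one and the rest of the pipeline is unaffected; that interchangeability is the payoff of text-based interfaces.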

Case Study: Custom Log Analyzer

To see this “personal tool solution” in action, let us look at how the log analysis toolset came to be. I did not start by writing Python code. I started by figuring out what I wanted from the logs and then defining how to get it. In particular, I was interested in finding out which AI agents were looking at my website.

Step 1: The Fetcher

I asked the AI: “Write a bash script that downloads the CloudFront logs from my S3 bucket.”

Result: A simple script fetch_logs.sh that I can run anytime I need fresh data.

Refined: I added some logic to skip files I already had, streamlining the process. I also added some command line options for convenience, like the destination directory.
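A sketch of what such a script can look like. The bucket name is a placeholder, and the sketch prints the `aws s3 sync` command instead of running it, so it works without AWS credentials; delete the `echo` once the paths match your setup:

```shell
#!/usr/bin/env bash
# fetch_logs.sh (sketch) -- pull CloudFront logs down from S3.
# The bucket below is a placeholder; point it at your own log bucket.
set -euo pipefail

fetch_logs() {
  local bucket='s3://my-cloudfront-logs'   # assumption: your log bucket
  local dest="${1:-./logs}"                # optional destination directory
  mkdir -p "$dest"
  # `aws s3 sync` copies only new or changed objects, which gives the
  # "skip files I already have" refinement for free. The echo keeps this
  # sketch runnable without AWS credentials; delete it to really fetch.
  echo aws s3 sync "$bucket" "$dest" --exclude '*' --include '*.gz'
}

fetch_logs "${1:-./logs}"
```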

Step 2: The Analyzer

Next, with a consolidated JSON file, I asked: “Create a script that reads that JSON and returns the User-Agents binned by hour and writes it out as a JSON report.”

Result: analyze.ts that generates a nice JSON object with binned data.

Refined: The User-Agent needed a bit more filtering to identify the AI agents. I also added geo-IP lookup to bin requests by country, plus some other stats. Of particular interest were the 404 responses and what the scanning bots were looking for (hint: .env is a favorite snack for these beasts). The key here was understanding the shape of the data.
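The real analyzer is analyze.ts; here is a stand-alone sketch of the same binning idea in awk, assuming (an assumption about the data shape) that the consolidated log is one JSON object per line with `time` and `user_agent` fields:

```shell
#!/usr/bin/env bash
# Sketch of the binning step. Assumed input shape, one JSON object per
# line: {"time":"2024-05-01T10:12:03Z","user_agent":"GPTBot/1.0",...}
set -euo pipefail

bin_agents_by_hour() {
  awk '
    match($0, /"time":"[^"]*"/) {
      t = substr($0, RSTART + 8, RLENGTH - 9)   # the timestamp value
      hour = substr(t, 12, 2)                   # "10" from ...T10:12:03Z
    }
    match($0, /"user_agent":"[^"]*"/) {
      ua = substr($0, RSTART + 14, RLENGTH - 15)
      count[hour "|" ua]++
    }
    END {
      for (k in count) {
        split(k, p, "|")
        printf "{\"hour\":\"%s\",\"agent\":\"%s\",\"count\":%d}\n", p[1], p[2], count[k]
      }
    }'
}

# Demo: one request yields one record per (hour, agent) pair.
printf '%s\n' '{"time":"2024-05-01T10:12:03Z","user_agent":"GPTBot/1.0"}' |
  bin_agents_by_hour
```

From this JSON-per-line output, filtering for known AI agent strings is one more small step in the pipeline.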

Step 3: The Viewer

Now I had a large JSON object with interesting data, which was just inconvenient enough to work with. So I asked the AI: “Take this JSON file as input and give me a static web page to display the sections as a table.”

Result: An index.html with a corresponding style.css and a simple app.js that reads the JSON file. I serve this locally with http-server to view the data as a set of tables. This confirmed that, for viewing data, a site is not an app, and a simple static page is often enough.

Refined: As I added more interesting analysis, I added a simple drop-down to select the specific data set I want to view as a table. I really should add some graphs to the display too.

I now have a pipeline: Fetch the data, analyse the data, display it to me. Note that each step is very specific to my infrastructure and my needs.

This follows the “Unix Philosophy”: small tools that do one thing well and pipe data to each other. In the above set of scripts I opted to use a single data.json file as the intermediary, but if my need changes, I can always alter my tools.

Your Turn to Build

My log analyser won’t help you, but you certainly have your own very specific needs to address. Take a moment and think: what is one small, repetitive, or annoying task you do?

  • Format copied text into markdown?
  • Download and collate several files for a single piece of insight?
  • Check a spreadsheet for a specific but annoying set of conditions?

Now, instead of wishing for a solution, or asking the AI to do it for you, imagine describing that task to your AI and asking it to build a small tool to help you with it. It requires breaking old habits and trusting that code is just a medium for your ideas.

This technology has given you a way to get the tools you need: make them yourself. BE the Tool-Maker!

Small repeatable tools are:

  • Deterministic, and so not prone to hallucination
  • Not dependent on any specific LLM
  • More cost-effective than asking an LLM to do the task each time


About Heinrich Venter

A technology leader, he is passionate about mentoring developers and growing teams in learning environments that foster impactful solutions.