I have been playing around with Ollama, a package for downloading and running large language models (LLMs) locally. It provides a convenient environment for experimenting with LLMs without worrying about rate limits or privacy. Notably, it makes it easy to build tool-calling workflows, where the LLM acts as an agent that calls external tools to perform tasks.

Below is a simple example of using Ollama to run an LLM agent that writes and executes a Python script to accomplish a user-defined task. In this case, we will instruct the agent to write and run a program, primes.py, that generates the first 100 prime numbers and saves them to a text file called primes.txt.

First, we need to install the package.

$ pip install ollama

Then, we need to download the model we want to use.

$ ollama pull granite4:3b

This is a 3B-parameter version of the Granite4 model developed by IBM, which requires about 2GB of local storage.

Next, we need the tool definitions. Let’s write functions for saving and running a Python script and save them in a file called tools.py.


import subprocess
from pathlib import Path


def write_python_script(code: str, filename: str) -> str:
    """Write a Python script to a file.

    Args:
        code: The Python script contents.
        filename: The name of the file to write to.

    Returns:
        The path to the written script.
    """
    # Write the script to the current directory
    script_path = Path(filename)
    script_path.write_text(code, encoding='utf-8')
    return str(script_path)


def run_python_script(filename: str, args: list[str] | None = None) -> str:
    """Run a Python script and return the output.

    Args:
        filename: The name of the script file to run.
        args: Optional list of command line arguments.

    Returns:
        The output of the script.
    """

    try:
        if args is None:
            args = []
        
        script_path = Path(filename)
        if not script_path.exists():
            return f"Error: Script file {filename} does not exist."
        
        # Run the script with safeguards
        result = subprocess.run(
            ['python3', str(script_path), *args], 
            capture_output=True, 
            text=True, 
            check=True,
            timeout=1,
            stdin=subprocess.DEVNULL
        )
        return result.stdout.strip()
    except subprocess.TimeoutExpired:
        return "Error: Script execution timed out."
    except subprocess.CalledProcessError as e:
        return f"Error executing script: {e.stderr}"
    except Exception as e:
        return f"Error: {str(e)}"
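
Before wiring these tools into an agent, it's worth sanity-checking what they do. The snippet below is a standalone equivalent of calling write_python_script followed by run_python_script (inlined so it runs without tools.py, and using sys.executable instead of the hard-coded python3, with a looser timeout):

```python
import subprocess
import sys
from pathlib import Path

# Equivalent of write_python_script(code, "hello.py")
script_path = Path("hello.py")
script_path.write_text("print('hello from the sandbox')", encoding="utf-8")

# Equivalent of run_python_script("hello.py"), with the same safeguards
result = subprocess.run(
    [sys.executable, str(script_path)],
    capture_output=True,
    text=True,
    check=True,
    timeout=10,
    stdin=subprocess.DEVNULL,
)
print(result.stdout.strip())  # hello from the sandbox
```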

Now, we can import the tools and use them with our agent. Let’s create a new file called agent.py.

from ollama import chat
from tools import write_python_script, run_python_script

available_functions = {
    'write_python_script': write_python_script,
    'run_python_script': run_python_script,
}

if __name__ == "__main__":
    
    model = 'granite4:3b'
    messages = []

    input_prompt = input("Enter a prompt: ")
    messages.append({'role': 'user', 'content': input_prompt})
    
    while True:
        response = chat(
            model=model,
            messages=messages,
            tools=list(available_functions.values())
        )
        messages.append(response.message)
        print("content: ", response.message.content)
        if response.message.tool_calls:
            for tc in response.message.tool_calls:
                if tc.function.name in available_functions:
                    print(f"calling function {tc.function.name} with args {tc.function.arguments}\n")
                    result = available_functions[tc.function.name](**tc.function.arguments)
                    print(f"result: {result}\n")
                    messages.append({'role': 'tool', 'tool_name': tc.function.name, 'content': str(result)})
        else:
            break
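
The while loop above is the core agentic pattern: keep calling chat, execute any tools the model requests, feed the results back as role 'tool' messages, and stop once the model replies without tool calls. A minimal sketch with a stubbed model (a hypothetical fake_chat, no Ollama required) shows how the message list grows:

```python
# Sketch of the tool-call loop with a stubbed model, to illustrate
# how messages accumulate. fake_chat is a stand-in for ollama.chat.
from types import SimpleNamespace

def fake_chat(messages):
    # First turn: request a tool call; second turn: final answer.
    if not any(isinstance(m, dict) and m.get("role") == "tool" for m in messages):
        call = SimpleNamespace(function=SimpleNamespace(name="add", arguments={"a": 2, "b": 3}))
        return SimpleNamespace(message=SimpleNamespace(content="", tool_calls=[call]))
    return SimpleNamespace(message=SimpleNamespace(content="2 + 3 = 5", tool_calls=None))

tools = {"add": lambda a, b: a + b}
messages = [{"role": "user", "content": "What is 2 + 3?"}]

while True:
    response = fake_chat(messages)
    messages.append(response.message)
    if response.message.tool_calls:
        for tc in response.message.tool_calls:
            result = tools[tc.function.name](**tc.function.arguments)
            messages.append({"role": "tool", "tool_name": tc.function.name, "content": str(result)})
    else:
        break

print(messages[-1].content)  # 2 + 3 = 5
```

The stub answers with a tool call on the first turn and a plain reply on the second, so the loop runs exactly twice, just like the real agent does for the primes task below.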
    

Now, we can run the agent. Let’s give it a simple task: write a Python script that generates the first 100 prime numbers and saves them to a text file.

% python agent.py
Enter a prompt: Write a python program primes.py that prints the first 100 Prime numbers to a text file called primes.txt. Ensure that the necessary imports are included. After saving the program, run it.
content:  
calling function write_python_script with args {'code': 'import math\n\ndef is_prime(n):\n    if n <= 1:\n        return False\n    for i in range(2, int(math.sqrt(n)) + 1):\n        if n % i == 0:\n            return False\n    return True\n\nprimes = []\nnum = 2\nwhile len(primes) < 100:\n    if is_prime(num):\n        primes.append(num)\n    num += 1\n\nwith open(\'primes.txt\', \'w\') as f:\n    for p in primes:\n        f.write(str(p) + "\\n")\n', 'filename': 'primes.py'}

result: primes.py

content:  
calling function run_python_script with args {'args': ['primes.py'], 'filename': 'primes.py'}

result: 

content:  The Python program `primes.py` has been successfully written and executed. It generated the first 100 prime numbers and printed them to the file **primes.txt**. The execution completed without any errors.

Success! The directory now contains two new files: the Python script primes.py written by the agent and the text file primes.txt produced when the agent ran the script.

The primes.py file contains the following code:

import math

def is_prime(n):
    if n <= 1:
        return False
    for i in range(2, int(math.sqrt(n)) + 1):
        if n % i == 0:
            return False
    return True

primes = []
num = 2
while len(primes) < 100:
    if is_prime(num):
        primes.append(num)
    num += 1

with open('primes.txt', 'w') as f:
    for p in primes:
        f.write(str(p) + "\n")

And primes.txt contains the first 100 prime numbers:

2
3
5
7
11
13
17
19
23
29
31
37
41
43
47
53
59
61
67
71
73
79
83
89
97
101
103
107
109
113
127
131
137
139
149
151
157
163
167
173
179
181
191
193
197
199
211
223
227
229
233
239
241
251
257
263
269
271
277
281
283
293
307
311
313
317
331
337
347
349
353
359
367
373
379
383
389
397
401
409
419
421
431
433
439
443
449
457
461
463
467
479
487
491
499
503
509
521
523
541
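
As a sanity check, we can recompute the first 100 primes independently with a Sieve of Eratosthenes and confirm that there are 100 of them and that the 100th is 541, matching the last line above:

```python
def primes_up_to(limit: int) -> list[int]:
    """Return all primes <= limit using the Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

# 600 is a safe upper bound: there are more than 100 primes below it.
primes = primes_up_to(600)[:100]
print(len(primes), primes[-1])  # 100 541
```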

Afterward, we can remove the Granite4 model from our local machine to free up space.

$ ollama rm granite4:3b