
Replacing Pipenv with UV and Taskfile for Python Ops

·802 words·4 mins
Stanislav Cherkasov, {DevOps,DevSecOps,Platform} Engineer

tools - This article is part of a series.

I stuck with Pipenv for a long time. Like, a really long time.

It was my comfort zone. It did the two things I cared about: it locked my dependencies (eventually) and it gave me a place to stash my ugly one-liner scripts. The “deps + scripts” model was simple, and simple is usually good.

But priorities change. I got tired of waiting for the resolver to finish its coffee, and I wanted a workflow entry point that didn’t care if it was running on my laptop or a disposable CI runner.

Enter uv (for speed) and Taskfile (for sanity). Here is how I swapped them out without losing that “one command to rule them all” feeling.

TL;DR

I traded this:

  • Pipfile & Pipfile.lock (slow)
  • Pipfile [scripts] (fine… buuuut…)

For this:

  • pyproject.toml & uv.lock (standard + supported by renovate/dependabot)
  • uv sync (blazing fast)
  • Taskfile.yml (powerful… aaand “CI-agnostic”)

The Tools

uv: The Speed Demon

If you haven’t used uv yet, you are in for a treat. It manages virtual environments, resolves dependencies, and runs commands.

What I actually use it for:

  • uv sync: Creates or updates .venv from the lockfile. And it does it before you can alt-tab away.
  • uv run: Executes Ansible or generic Python scripts inside the environment without the ceremony of source .venv/bin/activate.

The benefit is raw speed. Syncing feels instantaneous compared to the “Pipenv lock…” pause I used to dread.
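In practice the day-to-day loop is just two commands. A sketch (the playbook name matches the Taskfile below; substitute your own):

```
# Create or refresh .venv from uv.lock, including dev extras
uv sync --all-extras --dev

# Run tools inside .venv without activating it first
uv run ansible-playbook playbook_install.yml
uv run ansible-lint playbook_install.yml
```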

Taskfile: The Orchestrator

Taskfile is a tiny Go binary that runs tasks defined in YAML. Think of it as Make, but readable by humans born after 1990.

Why it fits Ops repos:

  • Single Entry Point: task lint, task deploy. No more “README archaeology” to find the right command.
  • CI Agnostic: It works the same locally as it does in GitHub Actions or GitLab CI.
  • Zero Runtime: It’s a static binary. You don’t need Python/Node/Ruby installed just to run your task runner.
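The “CI agnostic” point is the big one for me: the pipeline installs the two binaries and calls the exact task you run locally. A hypothetical GitHub Actions job as a sketch (the setup action names and versions are assumptions, check the marketplace for current ones):

```yaml
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5   # installs uv (version is an assumption)
      - uses: arduino/setup-task@v2   # installs task (version is an assumption)
      - run: task lint                # same entry point as on a laptop
```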

Why I left Pipenv

I want to be clear: this isn’t a “Pipenv is bad” hit piece. Pipenv served me well. It kept my environments isolated and my commands memorable.

But operational strategy is about removing friction.

The setup I want today optimizes for:

  1. Fast Feedback: Waiting for dependency resolution breaks flow.
  2. Explicit Contracts: I want a repo to say “run this to work,” regardless of the machine.
  3. Self-Hosted Friendliness: Hosted runners change, limits tighten. I want a workflow that survives platform shifts.

The Result: uv + Taskfile

My repo now revolves around three files:

  • pyproject.toml (What I need)
  • uv.lock (What I use)
  • Taskfile.yml (How I work)

The mental model is simple: uv owns Python, Taskfile owns execution.

The Taskfile Strategy

This is the shape I ended up with. Notice how dependencies are a first-class citizen (deps), and everything runs through uv run.

# yaml-language-server: $schema=https://taskfile.dev/schema.json

version: "3"

vars:
  playbook: playbook_install.yml

tasks:
  deps:
    desc: Sync dependencies with uv
    sources:
      - pyproject.toml
      - uv.lock
    generates:
      - .venv/pyvenv.cfg
    cmds:
      - uv sync --all-extras --dev
    silent: true

  default:
    desc: Run the main playbook
    deps: [deps]
    cmds:
      - uv run ansible-playbook {{.playbook}}

  lint:
    desc: Run Ansible lint
    deps: [deps]
    cmds:
      - uv run ansible-lint {{.playbook}}

  upgrade:
    desc: Upgrade all dependencies
    cmds:
      - uv sync --upgrade

Why separating deps matters

Pipenv made environment creation implicit. It felt like magic, until it broke.

In Taskfile, I prefer it explicit. Every task that needs Python depends on deps. This makes failures clear and keeps the state consistent. Plus, with the sources and generates check, Taskfile is smart enough to skip uv sync if nothing changed.
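You can see the fingerprinting from the command line. A rough sketch of what to expect (exact messages vary by Task version):

```
task deps          # first run: executes `uv sync`
task deps          # second run: skipped, sources are unchanged
task deps --force  # bypass the up-to-date check and run anyway
```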

Migration Steps

I followed an “automate first, refine later” approach.

1. Convert the Project

From the repo root, I let uv do the heavy lifting:

uvx migrate-to-uv

This generated a valid pyproject.toml and uv.lock from my existing Pipfiles.

2. Review Dependencies

I took a moment to clean up. Do I really need that library I added for a one-off script three years ago? Probably not. I separated true main dependencies from dev dependencies.
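uv makes this audit cheap. A sketch of the commands I mean (the package names are placeholders, not from the real project):

```
uv tree --depth 1            # top-level view of what the project pulls in
uv remove that-one-off-lib   # placeholder name: drop what you no longer need
uv add --dev ansible-lint    # move pure tooling into the dev group
```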

3. Move Scripts to Tasks

I mapped my old Pipfile scripts to Taskfile tasks.

  • pipenv run ansible-playbook -> task default
  • pipenv run lint -> task lint

4. Search and Replace

I grepped for pipenv run in my CI configs and documentation, replacing it with uv run.
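A minimal sketch of that sweep, with a throwaway file standing in for a real CI config (paths are illustrative; GNU sed shown, BSD sed needs `-i ''`):

```shell
# Stand-in for a CI config that still calls pipenv
mkdir -p demo
printf 'script:\n  - pipenv run ansible-lint playbook.yml\n' > demo/ci.yml

# Find every file mentioning `pipenv run` and rewrite it in place
grep -rl 'pipenv run' demo/ | while read -r f; do
  sed -i 's/pipenv run/uv run/g' "$f"
done

cat demo/ci.yml   # the line now reads: uv run ansible-lint playbook.yml
```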

The Verdict

The improvements were immediate:

  • Speed: My CI pipelines dropped seconds just on the setup step.
  • Clarity: The Taskfile.yml acts as self-documenting code. New engineers don’t ask “how do I run this?”; they just type task --list.
  • Portability: I can run this on a fresh Fedora VM, a macOS laptop, or a GitHub runner, and it behaves exactly the same.

Tools should serve the workflow, not the other way around. Changing to uv wasn’t just about chasing the new shiny thing; it was about respecting my own time.
