Excel-to-API in a Weekend: Turning a Spreadsheet Model into a Microservice
The model has been running for three years. It calculates the levered IRR on infrastructure investments — entry multiple, debt structure, revenue growth, exit assumptions, all of it — and it does so with a level of accuracy and analytical nuance that took a senior analyst two months to build and another six months of iteration to refine. Everyone trusts it. The investment committee trusts it. The portfolio management team trusts it. And so, when a junior analyst on the deal team needs an IRR for a new target, they email the model owner, who is in the middle of something else entirely, who opens the model, changes the inputs, notes the output, and emails back. When the associate building the client portal wants to show indicative return scenarios, they either hard-code numbers or rebuild a simplified version of the model logic in JavaScript that has already diverged from the canonical version in ways nobody has fully tracked. When the FP&A team wants to run fifty sensitivity scenarios for a board presentation, they manually change inputs fifty times and copy outputs into a table, a process that takes two hours and introduces transcription errors at roughly the rate you would expect from two hours of manual copy-and-paste work.
Every one of these friction points shares a root cause: the model is not callable. It exists as a file, opened by humans, operated by humans, returning results to humans through the clipboard. The logic is excellent. The accessibility architecture is 1997. Wrapping that model behind an API endpoint — a web address that accepts input parameters and returns calculated outputs — transforms it from a file that must be manually operated into a service that any application, any spreadsheet, any portal, and any automated pipeline can query programmatically. The business logic stays exactly where it is, maintained by the people who understand it, versioned and governed the same way it always was. What changes is the interface: instead of "open the file and change the inputs," the interaction becomes "send a request with the inputs and receive the result." The migration from file to microservice takes a focused weekend, requires no cloud infrastructure expertise, and does not touch a single formula in the original model.
The blueprint begins with understanding what the model actually needs at its boundaries: what inputs it requires and what outputs it produces. This boundary definition becomes the API contract — the formal specification of what the service accepts and what it returns. For an IRR model, the inputs might be entry enterprise value, entry year, exit multiple, exit year, revenue CAGR, EBITDA margin profile, and debt structure parameters. The outputs are levered IRR, equity multiple, and perhaps a simplified cash flow summary. Every input has a type — numeric, string, boolean — a valid range, and a default value for optional parameters. Every output has a type and a defined unit. This specification is documented before a line of code is written, because it drives every subsequent design decision and because it becomes the reference document that every downstream consumer of the API uses to build their integration.
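A contract of that kind can be written down in any structured format before coding begins. The sketch below is one illustrative shape for the IRR model described above — every field name, unit, and default here is a hypothetical example, not a prescription:

```json
{
  "endpoint": "POST /irr",
  "inputs": {
    "entry_enterprise_value": {"type": "number", "unit": "USD mm", "required": true, "min": 0},
    "entry_year":             {"type": "integer", "required": true},
    "exit_multiple":          {"type": "number", "unit": "x EBITDA", "required": true, "min": 0},
    "exit_year":              {"type": "integer", "required": true},
    "revenue_cagr":           {"type": "number", "unit": "decimal fraction", "required": false, "default": 0.05},
    "ebitda_margin":          {"type": "number", "unit": "decimal fraction", "required": false, "default": 0.25},
    "debt_pct_of_ev":         {"type": "number", "unit": "decimal fraction", "required": false, "default": 0.50}
  },
  "outputs": {
    "levered_irr":      {"type": "number", "unit": "decimal fraction"},
    "equity_multiple":  {"type": "number", "unit": "x"},
    "cash_flow_summary": {"type": "array of numbers", "unit": "USD mm, one entry per year"}
  }
}
```

Whatever the format, the point is that every consumer integrates against this document, not against the spreadsheet.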
The Python framework for exposing the model as a web service is FastAPI, which has become the standard choice for this category of application due to its combination of performance, automatic documentation generation, and native support for type-annotated request and response schemas. A FastAPI application for an Excel model microservice requires surprisingly little code. The application defines a Pydantic model — a Python class with typed fields — representing the request body, another Pydantic model representing the response body, and a single endpoint function decorated with the route path and HTTP method that accepts the request, invokes the model calculation logic, and returns the response. FastAPI automatically validates incoming requests against the request schema, returning descriptive error messages for any parameter that is missing, out of range, or of the wrong type, before the calculation function is ever called. It also generates an interactive API documentation page at /docs that allows any user with network access to the service to explore the endpoint, construct a test request with custom inputs, and inspect the response — a capability that dramatically accelerates integration work for downstream consumers.
The calculation logic that sits behind the endpoint can take one of two forms depending on the model's characteristics and the requirements for calculation fidelity. The first approach extracts the model's computational logic from Excel entirely, reimplementing it as native Python functions using numpy for numerical operations and numpy_financial for financial functions like IRR and NPV. This approach produces the fastest API response times, eliminates any dependency on the Excel application being installed on the server, and enables the calculation logic to be unit-tested independently using the Python testing framework described in an earlier post in this series. It requires the most upfront effort — carefully translating the model's formula logic into equivalent Python — and carries a validation obligation: the Python implementation must be verified to produce outputs that agree with the original Excel model to within acceptable tolerance across a representative set of test cases before it is deployed as a production service. The second approach uses xlwings to drive a running Excel instance on the server, setting input cells programmatically, triggering recalculation, and reading output cells — effectively automating the same operations a human would perform manually. This approach preserves the original model's calculation logic with perfect fidelity, requires no translation work, and is the appropriate choice for models whose complexity makes full Python reimplementation impractical within a weekend timeframe. It requires Excel to be licensed and running on the deployment machine, which is a meaningful infrastructure constraint, but for a first deployment within an organization that already has Excel licenses, it is often the pragmatic path to a working service in the shortest possible time.
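For the first, Python-native approach, even the IRR itself needs no Excel: the rate at which NPV is zero is a root of a polynomial whose coefficients are the cash flows, which plain numpy can find. The sketch below shows that idea plus a deliberately toy levered-IRR wrapper (bullet debt, no interest, amortization, or fees) — the real translation work is porting the actual model's formulas, and the result must be validated against the workbook before deployment:

```python
import numpy as np


def irr(cash_flows):
    """Internal rate of return via polynomial roots.

    NPV(r) = sum(cf_t / (1+r)**t) = 0; substituting x = 1/(1+r) makes this
    a polynomial in x whose coefficients are the cash flows.
    """
    coeffs = np.array(cash_flows[::-1], dtype=float)  # highest power first
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-10].real
    rates = 1.0 / real[real > 0] - 1.0  # x > 0 implies r > -100%
    if rates.size == 0:
        raise ValueError("no real IRR for these cash flows")
    return float(rates[np.argmin(np.abs(rates))])  # root nearest zero


def levered_irr(entry_ev, debt_pct, entry_ebitda, revenue_cagr,
                exit_multiple, hold_years):
    """Toy levered IRR: equity check at entry, equity proceeds at exit."""
    equity_in = entry_ev * (1.0 - debt_pct)
    exit_ebitda = entry_ebitda * (1.0 + revenue_cagr) ** hold_years
    equity_out = exit_ebitda * exit_multiple - entry_ev * debt_pct  # bullet repayment
    flows = [-equity_in] + [0.0] * (hold_years - 1) + [equity_out]
    return irr(flows)
```

The numpy-financial package offers a ready-made `npf.irr` with the same semantics; the hand-rolled version is shown only to make the mechanics visible and unit-testable.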
Containerization with Docker is the step that transforms the FastAPI application from a script that runs on a specific machine into a portable, deployable service that can run anywhere. A Dockerfile for an Excel microservice is straightforward: it specifies a Python base image, copies the application code and any required model files into the container, installs the Python dependencies from a requirements file, and defines the command that starts the FastAPI server. Building and running the Docker container produces a service accessible at a local port that behaves identically to how it will behave when deployed to a cloud environment. For the Python-native calculation approach, the container is fully self-contained and genuinely portable. The xlwings approach is the exception: Excel cannot be reliably installed or automated inside a Windows container, and Microsoft does not support unattended server-side automation of Office, so that variant is better deployed directly on a licensed Windows server or VM, with containerization deferred until the calculation logic is ported to Python.
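For the Python-native variant, a Dockerfile along these lines is usually all that is needed — it assumes the application lives in main.py exposing an object named `app`, and that fastapi and uvicorn are listed in requirements.txt:

```dockerfile
# Slim Python base image keeps the container small
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and any reference model files
COPY . .

EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

`docker build -t irr-service .` followed by `docker run -p 8000:8000 irr-service` yields the same service at localhost:8000 that will later run in any container host.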
Authentication and input validation are the two security controls that must be in place before the service receives requests from outside the developer's own machine. Authentication is implemented using API key headers — a simple scheme where each authorized consumer is issued a unique key that must be included in every request, validated by the service before the calculation runs, and revocable without affecting other consumers. Input validation is handled natively by FastAPI's Pydantic integration, but the validation rules must be calibrated to the model's actual input constraints: the IRR model that produces nonsensical outputs for negative entry multiples should have a Pydantic validator that rejects negative entry multiple values with an informative error message rather than returning a result the consumer might use without realizing it is arithmetically absurd.
With the service running and authenticated, the consumption patterns that previously required manual intervention become trivial to automate. The associate's client portal sends a POST request with scenario parameters and receives the IRR and equity multiple in the response body, displaying them dynamically without any model owner involvement. The FP&A team's sensitivity analysis runs fifty scenarios by sending fifty parallel API requests from a Python script and collecting the results into a DataFrame that writes directly into the board presentation workbook. The other Excel models that need this calculation add a =WEBSERVICE() call (which can only issue GET requests, so the endpoint must also accept query-string parameters) or a VBA HTTP request function that queries the endpoint at refresh time, eliminating the manually maintained copies of duplicated logic. The model owner spends their time maintaining and improving one authoritative version of the calculation rather than fielding emails from everyone who needs it.
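The fifty-scenario sensitivity run reduces to a short script. This sketch uses only the standard library; the endpoint URL, key, and field names are hypothetical and must match whatever contract your service actually exposes:

```python
import concurrent.futures
import json
import urllib.request

# Hypothetical endpoint and key; match them to the deployed service.
API_URL = "http://localhost:8000/irr"
API_KEY = "demo-key-123"


def build_scenarios(base_inputs, cagr_values):
    """One request payload per revenue-CAGR value, holding other inputs fixed."""
    return [{**base_inputs, "revenue_cagr": c} for c in cagr_values]


def run_scenario(payload):
    """POST one scenario and merge its inputs with the service's outputs."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-API-Key": API_KEY},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return {**payload, **json.loads(resp.read())}


def run_sensitivity(base_inputs, cagr_values, max_workers=10):
    """Fan scenarios out over a thread pool; results keep the input order."""
    scenarios = build_scenarios(base_inputs, cagr_values)
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_scenario, scenarios))


if __name__ == "__main__":
    base = {"entry_enterprise_value": 100.0, "exit_multiple": 9.0, "hold_years": 5}
    results = run_sensitivity(base, [i / 100 for i in range(1, 51)])  # 1%..50%
    # results is a list of dicts, ready for pandas.DataFrame(results)
```

Two hours of manual copy-and-paste becomes a few seconds of wall-clock time, with zero transcription errors.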
This is exactly the kind of architectural leverage that Cell Fusion Solutions specializes in building for finance teams. We take the analytical models your organization has invested years refining and make them programmatically accessible — wrapping them in FastAPI microservices, containerizing them for portable deployment, and integrating them into the portals, dashboards, and downstream workbooks that need their outputs — so that the logic your team trusts serves the entire organization rather than sitting behind a file that only one person knows how to operate. If you have a model that everyone needs and nobody can easily access, Cell Fusion Solutions can turn it into a service by Monday morning.