Built by Daniel Taylor
Programme intelligence, encoded into software. Built from 10+ years of managing what existing tools were never designed to handle.
The Origin
Every programme management tool I’ve used solves the same narrow problem: tracking tasks against timelines. But the real questions in programme leadership cut across dimensions — a dependency slip has a budget impact, a resource over-allocation has a delivery cost, a risk has a financial exposure, and none of these live in the same spreadsheet.
I spent years stitching together partial views from tools that couldn’t talk to each other. Health assessments done manually. Steering committee decks assembled from five different sources. What-if questions answered with gut feel instead of data. After my last role, I decided to build what should have existed all along — a system where every dimension of programme health is connected, queryable, and visible in one place.
The Platform
PRAXIS encodes programme management methodology into a knowledge graph, then uses AI to interrogate it. Every piece of data is queryable across the financial, schedule, resource, risk, dependency, and value dimensions simultaneously. This is the connected intelligence that programme leaders have been building manually in spreadsheets for decades.
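To make "queryable across dimensions simultaneously" concrete, here is a deliberately toy sketch: one filter that touches financial, schedule, risk, and dependency attributes of the same connected record in a single pass. The field names and data are invented for illustration, not PRAXIS's actual schema.

```python
# Invented example records; in PRAXIS these would be nodes in the knowledge
# graph, each carrying attributes from several dimensions at once.
initiatives = [
    {"name": "API migration", "budget_variance": 0.12, "slip_days": 10,
     "open_risks": 2, "blocked_dependents": 3},
    {"name": "Data platform", "budget_variance": -0.02, "slip_days": 0,
     "open_risks": 0, "blocked_dependents": 0},
]

# One query spanning four dimensions: over budget AND slipping AND carrying
# risk or dependency exposure. No stitching of separate spreadsheets needed.
hotspots = [i["name"] for i in initiatives
            if i["budget_variance"] > 0.10
            and i["slip_days"] > 0
            and (i["open_risks"] > 0 or i["blocked_dependents"] > 0)]
# hotspots -> ["API migration"]
```

The point is not the filter itself but that every dimension lives on the same record, so cross-cutting questions become one query instead of five lookups.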
Automated Health Assessment
102 methodology constraints execute against project data automatically. Not traffic-light dashboards someone manually set to green — genuine health scoring with category breakdowns, critical findings, and actionable guidance. The kind of assessment that used to take me half a day to compile, generated in under 30 seconds.
AI-Powered Analysis
62 specialised AI tools with direct access to the project knowledge graph. Not a chatbot with your plan pasted in — structured analysis that traces dependency chains, simulates delays, detects resource conflicts, and explains findings in plain English. “What happens if the API migration slips by 3 weeks?” returns every downstream task affected, every resource conflict created, every milestone at risk.
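A what-if query like the one above can be sketched as a worst-case slip propagation: push the slipped task's finish date out, then sweep the dependency map until every downstream task has inherited the delay. The task names, dates, and flat propagation rule are invented for the example; the real analysis would account for float and partial overlaps.

```python
from datetime import date, timedelta

# Invented mini-plan: each task has a finish date and its predecessors.
TASKS = {
    "api_migration": {"finish": date(2025, 3, 7),  "depends_on": []},
    "data_sync":     {"finish": date(2025, 3, 21), "depends_on": ["api_migration"]},
    "go_live":       {"finish": date(2025, 4, 4),  "depends_on": ["data_sync"]},
}

def simulate_slip(task: str, weeks: int) -> dict[str, date]:
    """Worst case: every task downstream of the slip inherits the full delay."""
    delta = timedelta(weeks=weeks)
    shifted = {task: TASKS[task]["finish"] + delta}
    changed = True
    while changed:                      # keep sweeping until nothing new moves
        changed = False
        for name, t in TASKS.items():
            if name not in shifted and any(d in shifted for d in t["depends_on"]):
                shifted[name] = t["finish"] + delta
                changed = True
    return shifted
```

Asking `simulate_slip("api_migration", 3)` returns a new finish date for every affected task, which is the raw material the AI layer then turns into plain-English findings about milestones at risk.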
Executive Reporting
I’ve spent hundreds of hours assembling steering committee decks from scattered sources. PRAXIS generates branded PDF and PowerPoint reports instantly — health scores, findings, trends, and recommendations ready for exec review. The hours I used to spend on reporting mechanics are now spent on the analysis and judgement calls that actually move programmes forward.
Under the Hood
Every technology choice in PRAXIS reflects a lesson learned from doing this work at scale. These aren’t stack decisions made from a blog post — they’re the result of understanding what programme data actually looks like and how programme leaders actually need to use it.
Programme data is inherently connected — initiatives depend on other initiatives, resources work across teams, risks cascade through dependency chains. A graph database models these relationships natively instead of forcing them into rows and columns. When you ask “what’s affected if this slips?”, the answer is a graph traversal, not a VLOOKUP.
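That "graph traversal, not a VLOOKUP" claim can be shown in a few lines. Below is a breadth-first walk over an invented dependency map; in the real platform this traversal runs inside the graph database rather than application code, but the shape of the answer is the same: every task reachable from the one that slipped.

```python
from collections import deque

# Invented adjacency map: task -> tasks that depend on it.
DEPENDENTS = {
    "api_migration": ["data_sync", "reporting_ui"],
    "data_sync":     ["go_live"],
    "reporting_ui":  ["go_live"],
    "go_live":       [],
}

def downstream_impact(task: str) -> set[str]:
    """Breadth-first traversal: everything reachable from the slipped task."""
    affected: set[str] = set()
    queue = deque(DEPENDENTS[task])
    while queue:
        t = queue.popleft()
        if t in affected:
            continue                 # already visited via another path
        affected.add(t)
        queue.extend(DEPENDENTS[t])
    return affected
```

Note that `go_live` is reached through two paths but reported once; that diamond-shaped dependency is exactly the structure that row-and-column lookups handle badly and graphs handle natively.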
The platform's 62 tools need to run fast and frequently across large datasets. Haiku strikes the right balance of intelligence and cost efficiency for high-volume structured analysis; not every query needs the most powerful model. The AI has real tools and real data to interrogate, not just text to summarise.
Programme leads check health mid-meeting, between calls, in the corridor before a SteerCo. FastAPI handles complex graph queries with minimal latency. React delivers a responsive interface that works as fast as the decisions it supports. Speed isn’t a technical preference — it’s a usability requirement.
The Point
Building PRAXIS wasn’t about proving a point. It was the natural consequence of years spent solving these problems manually and knowing there had to be a better way. The methodology isn’t theoretical — it’s been stress-tested against real data structures, real edge cases, and real constraints. The platform is the most complete expression of how I think about programme leadership.