Following on from our recent focus on revenue assurance and auditability, this post looks at the technical transformation behind the scenes — how we rebuilt the gross margin reporting pipeline to make it faster, more resilient, and ready for the demands of high-frequency meter data.
From Fragile to Future-Ready
ENSEK’s gross margin reporting had grown fragile, relying on a legacy process that wasn’t keeping up. We saw late runs, patchy data, and too many manual fixes — not exactly confidence-inspiring when you’re dealing with millions in revenue. So we rebuilt it. We replaced the shaky old system with a robust automated pipeline on ENSEK’s modern data platform (our “Lakehouse” architecture). Guided by our commitment to Lives, not Just Loads, we eliminated points of failure, added transparency, and made the whole process faster and more resilient.
But we didn’t stop there. With the move to high-frequency meter data looming (a regulatory change to 48 meter readings per customer per day), we knew an even bigger data deluge was coming. Rather than wait and scramble, we fine-tuned our new pipeline ahead of time to handle the high-volume era. In this post, I’ll break down how we tackled both challenges, first fixing the foundation and then scaling up, and the impact it’s had on reliability, auditability, and financial control.
Why We Had to Rethink Gross Margin Reporting
Gross margin reporting calculates the difference between billed revenue and the cost of goods sold. It’s a core part of how energy suppliers track financial performance and ensure billing accuracy.
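For readers unfamiliar with the calculation, here is a minimal sketch in Python. Function and field names are illustrative only, not ENSEK's actual schema:

```python
def gross_margin(billed_revenue: float, cost_of_goods_sold: float) -> float:
    """Gross margin in absolute terms: what was billed minus what the energy cost."""
    return billed_revenue - cost_of_goods_sold

def gross_margin_pct(billed_revenue: float, cost_of_goods_sold: float) -> float:
    """Gross margin as a percentage of billed revenue."""
    return 100.0 * (billed_revenue - cost_of_goods_sold) / billed_revenue

# e.g. £120 billed against £90 of wholesale and network cost
print(gross_margin(120.0, 90.0))      # 30.0
print(gross_margin_pct(120.0, 90.0))  # 25.0
```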
Our legacy gross margin process was starting to show its age. It occasionally ran late or underperformed, which meant confidence in the numbers wasn’t where it needed to be.
With the shift to high-frequency meter data on the horizon — a regulatory change that replaces monthly meter reads (if we’re lucky!) with 48 half-hourly data points per customer per day — we saw a clear opportunity. Rather than wait for pressure to mount, we acted early to make gross margin reporting more scalable, transparent, and resilient.
Building a Resilient Gross Margin Pipeline
The first step was migrating off the fragile legacy setup and onto a sturdier, scalable platform. We rebuilt the gross margin process using ENSEK’s cloud-based Lakehouse data platform. That meant automating the pipeline end-to-end — from data ingestion through to output — and eliminating the brittle hand-offs that had made the old process prone to failure.
We also removed external dependencies that had previously caused delays and moved to an incremental change model with distributed compute, so even as data grew, we could handle it in chunks and keep things moving quickly.
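The incremental model mentioned above can be sketched as follows. This is a generic illustration of the pattern (a watermark tracks the last successful run, and only date partitions that changed after it are rebuilt), with invented names rather than ENSEK's actual implementation:

```python
from datetime import date

def partitions_to_process(changed_dates, last_watermark):
    """Return only the date partitions that changed after the watermark."""
    return sorted(d for d in set(changed_dates) if d > last_watermark)

def run_incremental(changed_dates, last_watermark, rebuild_partition):
    """Rebuild each stale partition independently (easy to distribute)."""
    todo = partitions_to_process(changed_dates, last_watermark)
    for d in todo:
        rebuild_partition(d)  # in production this would be a distributed job
    return max(todo, default=last_watermark)  # new watermark

# Usage: three distinct days arrived since the last run; only those are recomputed.
processed = []
new_wm = run_incremental(
    [date(2025, 1, 2), date(2025, 1, 3), date(2025, 1, 3), date(2025, 1, 4)],
    last_watermark=date(2025, 1, 1),
    rebuild_partition=processed.append,
)
print(processed)  # [datetime.date(2025, 1, 2), datetime.date(2025, 1, 3), datetime.date(2025, 1, 4)]
print(new_wm)     # 2025-01-04
```

Because each partition is rebuilt independently, the work parallelises naturally across a distributed compute cluster.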
The result? A gross margin job that used to need babysitting now runs like clockwork. These changes have been in production for over a year, and the difference is night and day. Results are now delivered the same day, and our team (and our clients’ finance teams) can start the morning confident that yesterday’s margins are correctly calculated and ready to review.
Just as importantly, we built in much deeper transparency. The new pipeline doesn’t just spit out a final number — it also preserves all the intermediate data and decisions. If an auditor or a finance analyst asks, “Why is this margin figure what it is?”, we can now drill down to every component: what consumption data we used, which price, which adjustments — all of it. Under the old system, a question like that meant a lot of manual digging, and sometimes never produced a clear answer. Under the new one, we can trace and explain any number, often in minutes.
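The idea of preserving the components behind each figure can be sketched like this; the structure and field names are hypothetical, chosen purely to illustrate the principle of keeping lineage alongside the result:

```python
def margin_with_lineage(consumption_kwh, unit_price, adjustments, cost):
    """Compute a margin figure but keep every input that produced it,
    so the number can be explained later without re-running anything."""
    revenue = consumption_kwh * unit_price + sum(adjustments)
    return {
        "margin": revenue - cost,
        "components": {
            "consumption_kwh": consumption_kwh,
            "unit_price": unit_price,
            "adjustments": list(adjustments),
            "billed_revenue": revenue,
            "cost_of_goods_sold": cost,
        },
    }

# 350 kWh at £0.25/kWh, a £5 credit adjustment, £52.50 cost of goods sold
result = margin_with_lineage(350.0, 0.25, [-5.0], 52.50)
print(result["margin"])                              # 30.0
print(result["components"]["billed_revenue"])        # 82.5
```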
In short, we engineered out the flakiness. The gross margin process is now solid, transparent, and fast. And this was the crucial foundation we needed before tackling the next challenge: scaling for high-frequency meter data.
Scaling for High-Frequency Meter Data
Once we had a reliable daily process, we turned to the looming data explosion driven by the shift to high-frequency meter data. ENSEK went live with more granular settlement in early 2026. Instead of a meter reading every month or so per customer, we now receive 48 readings per customer per day. Multiply that by hundreds of thousands of meters, and you can imagine the scale of data flowing in.
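Some back-of-the-envelope arithmetic makes the scale concrete. The 48 readings per day is the regulatory figure; the meter count below is an illustrative assumption, not an actual portfolio size:

```python
# Back-of-the-envelope scale check for half-hourly settlement.
READINGS_PER_DAY = 48     # one reading per half-hour slot
METERS = 500_000          # hypothetical portfolio size
DAYS_PER_YEAR = 365

readings_per_year = READINGS_PER_DAY * METERS * DAYS_PER_YEAR
print(f"{readings_per_year:,}")  # 8,760,000,000
```

In other words, a single year for a mid-sized portfolio is already in the billions of data points.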
To make sure we were prepared, I decided to put our system through its paces well before go-live. I took two years’ worth of synthetic half-hourly data — billions of data points — and ran them through our gross margin pipeline to simulate a future-state scenario. The first attempt was eye-opening: the job stalled and consumed far too many resources. In other words, if we did nothing, the volumes would have overwhelmed even our new pipeline. This early test was exactly the wake-up call we needed — and we got it at the right time, not when real data was on the line.
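A load test like this starts with a synthetic data generator. Here is a simplified sketch of the idea (shapes, values, and names are invented for illustration, not ENSEK's actual generator): each meter gets 48 readings per day following a crude day/night consumption curve plus noise.

```python
import random
from datetime import datetime, timedelta

def synthetic_day(meter_id: str, day: datetime, rng: random.Random):
    """Generate 48 half-hourly (meter, timestamp, kWh) readings for one day."""
    readings = []
    for slot in range(48):
        ts = day + timedelta(minutes=30 * slot)
        base = 0.6 if 7 <= ts.hour < 22 else 0.2  # daytime vs overnight load
        readings.append((meter_id, ts, base + rng.uniform(0.0, 0.1)))
    return readings

rng = random.Random(42)  # seeded for reproducible test runs
day = synthetic_day("MTR-0001", datetime(2025, 6, 1), rng)
print(len(day))      # 48
print(day[0][1])     # 2025-06-01 00:00:00
```

Scaling this out over hundreds of thousands of meters and two years of dates yields a realistic future-state volume without touching any real customer data.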
Armed with that insight, we went back into build mode. We reviewed the pipeline’s design and made targeted improvements to ensure it could handle the increased cadence and volume of data. Rather than adding complexity, we focused on simplifying and streamlining — removing inefficiencies and adapting the process to better reflect the expected rhythm of incoming data.
After these improvements, we ran the simulation again. This time, it sailed through. What previously choked the system now completed in under 10 minutes. It was a ridiculous improvement — in the best possible way. The pipeline could handle the 48x increase in data volume and then some, while remaining stable and cost-efficient under load.
Now that high-frequency data is flowing in, we haven’t had to scramble. The day the industry flipped the switch, we were already there — our systems just kept running. We continue to monitor and fine-tune (there are always a few quirks when something this big rolls out), but we didn’t need to make any fundamental changes at go-live. That’s the real payoff of future-proofing: when the future arrives, it’s business as usual.
What Makes Our Reporting Stand Out
All these changes haven’t just made the process faster and more scalable — they’ve also made its output more valuable. Here’s what’s different about gross margin reporting after our overhaul:
- Richer Detail and Drill-Downs: The new system captures data at a granular level and retains it. Instead of ending the process with one big summary table, we now preserve transaction-level and meter-level detail. Want to see gross margin by product, by day, by region, or even by an individual account? That’s now possible. Internal teams and customers can explore the data more flexibly — slicing and filtering to answer specific questions. This level of visibility wasn’t achievable with the previous setup, which relied on more heavily aggregated outputs.
- Point-in-Time Accuracy: We’ve implemented point-in-time modelling, meaning we can recreate exactly what the gross margin picture looked like at any past date, using the data available at that time. This is particularly useful for audit trails and investigations. If someone asks, “What did our gross margin report show as of the end of last quarter, and why?”, we can reconstruct it — and, crucially, explain the inputs and logic behind it.
- Built-in Data Quality Checks: As part of the rebuild, we introduced a range of data quality and consistency checks. These act as early warning systems — flagging anomalies, inconsistencies, or potential revenue leakage before they become problems. For example, if there’s a sudden gap between expected and actual revenue, or if consumption data looks off, the system highlights it. These checks often surface issues with financial impact, helping teams prioritise fixes and prevent errors from going unnoticed. We’ve also built in logic to detect when incoming industry data doesn’t align with previously received information — a common and frustrating issue for suppliers. Our ability to flag and handle these mismatches quickly has become a real strength, and something we’re proud of.
- From Spreadsheets to Live Dashboards: Historically, gross margin outputs were delivered via static spreadsheets — functional, but limited in flexibility. Sigma, our interactive analytics platform, is now live across multiple teams and continuing to roll out more widely. It allows users to filter, explore, and customise views — for example, a finance manager might track KPIs, while an analyst investigates account-level anomalies. Even if we ultimately evolve the tooling, the direction is clear: away from static outputs and towards more dynamic, user-friendly reporting.
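The point-in-time modelling described above can be illustrated with a minimal sketch: every revision of a figure is kept with the time it was recorded, and an "as-of" query returns the latest revision known at the requested date. The schema and names here are hypothetical, not the production model:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MarginRecord:
    account: str
    period: str            # e.g. "2025-Q1"
    margin: float
    recorded_at: datetime  # when this revision entered the system

def margin_as_of(records, account, period, as_of):
    """Latest figure for (account, period) that was known at `as_of`."""
    candidates = [r for r in records
                  if r.account == account and r.period == period
                  and r.recorded_at <= as_of]
    return max(candidates, key=lambda r: r.recorded_at, default=None)

history = [
    MarginRecord("A1", "2025-Q1", 1000.0, datetime(2025, 4, 2)),   # first cut
    MarginRecord("A1", "2025-Q1", 950.0, datetime(2025, 5, 10)),   # restated
]
print(margin_as_of(history, "A1", "2025-Q1", datetime(2025, 4, 15)).margin)  # 1000.0
print(margin_as_of(history, "A1", "2025-Q1", datetime(2025, 6, 1)).margin)   # 950.0
```

Because revisions are never overwritten, the same query can answer "what did the report show last quarter?" and "what does it show now?" from one table.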
All of this makes gross margin reporting more than just a compliance requirement. It’s becoming a tool for insight and action — helping users understand and trust their revenue data, and identify issues or opportunities that would previously have gone unnoticed.
The Impact: Audit Confidence and Revenue Recovery
The changes weren’t just technical; they delivered tangible business outcomes:
- Auditability and Trust: Auditability has improved beyond recognition. Previously, if auditors asked for proof behind a number, it could take our team hours — sometimes days — to compile logs and intermediate files, and even then we might have had gaps. Now, every calculation is traceable. We’ve already been through audit cycles where questions that once required manual investigation can now be answered on the spot by pulling up archived data. That level of transparency has made a real difference to how confidently we can stand behind the numbers.
- Revenue Protection: Improved reporting clarity has helped surface revenue leakage. One example: we identified an issue in how charge cancellations were being handled. In our billing system (Ignition), if a charge was cancelled and reissued in a certain inconsistent way, the re-billing didn’t always happen correctly — meaning the supplier might refund a customer but not recharge them. Our new gross margin pipeline, with its detailed checks, flagged this pattern. When we investigated, we found that one customer (a large energy retailer) had multiple instances of this scenario. Individually, each was small, but over time it added up. They’ve since begun work to address the issue and are exploring options to recover the under-billed revenue. It’s a strong example of how better data and transparency can drive action — surfacing issues that would previously have gone unnoticed.
- Operational Efficiency: The new system has freed up our teams from fire-fighting mode. Engineers (like me) aren’t waking up at 6am to check if the batch has failed. Margin analysts aren’t spending half their day reconciling and re-running numbers. Instead, they’re focused on what the numbers are telling us. That shift — from mechanical tasks to analytical work — means we can spend more time investigating anomalies, supporting decision-makers, and adding value.
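The cancellation scenario described under Revenue Protection can be illustrated with a toy version of the check. The event names and shapes here are invented for illustration; the production logic in the pipeline is considerably more involved:

```python
def find_unrebilled_cancellations(events):
    """events: iterable of (charge_id, event_type), where event_type is
    'charge', 'cancel', or 'rebill'. Returns charge_ids that were
    cancelled but never reissued — i.e. refunded without recharging."""
    cancelled, rebilled = set(), set()
    for charge_id, event_type in events:
        if event_type == "cancel":
            cancelled.add(charge_id)
        elif event_type == "rebill":
            rebilled.add(charge_id)
    return sorted(cancelled - rebilled)

events = [
    ("C1", "charge"), ("C1", "cancel"), ("C1", "rebill"),  # correct flow
    ("C2", "charge"), ("C2", "cancel"),                    # refund, no rebill
]
print(find_unrebilled_cancellations(events))  # ['C2']
```

Run daily against billing events, a check like this turns a silent leak into a flagged list that an analyst can action.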
In short, we’ve gone from a situation where gross margin reporting was a risk and a burden to one where it’s a source of assurance and insight. For our clients, it means they can trust the data we provide and use it to improve their own operations. For us at ENSEK, it strengthens our reputation as a partner that delivers reliable, forward-thinking solutions.
What’s Next: User Adoption and Life After the Transition
With the technical heavy lifting done, our attention has shifted to user adoption and continuous improvement. We want to make sure every relevant team is making the most of the enhancements now in place.
We’re supporting the transition away from the “classic” reports to the new platform outputs. Change management in reporting is as much a human challenge as a technical one — people get comfortable with established formats. Across the business, teams are helping users make the switch by providing side-by-side comparisons, updated dashboards, and support where needed. Our goal is to retire the legacy output entirely in the coming months. We’re pretty close: most users have already made the switch, and they’re not looking back.
Now that high-frequency meter data is flowing in, we’re staying vigilant — but we’re not in crisis mode. In fact, the early part of 2026 was deliberately kept light in terms of other projects so we could focus on monitoring and fine-tuning. As expected, there have been a few minor wrinkles as industry data starts flowing in new ways. We’ve been able to address those quickly — for example, adjusting to updated formats in the incoming data and collaborating with programme teams on any unexpected behaviours.
The key is that our core architecture held strong; no fundamental changes were needed. It’s a lot easier to tweak a parameter or two in a stable system than to design a whole new process under pressure. In other words, our early preparation meant go-live week was busy but calm — no fire-fighting, just close observation and pride in seeing the system do what it was built to do.
Looking further ahead, with high-frequency data now part of business-as-usual, we’re excited about what we can build on this foundation. More granular data opens the door to new possibilities: time-of-use pricing insights, more accurate accruals and forecasts, even predictive analytics on margin. Also on the roadmap is continuing to enrich the gross margin model itself.
We’ve been working on integrating more cost data (like network charges and other fees) into the platform so that eventually our gross margin report becomes a full profitability report — not just revenue and billing system reconciliation. That’s a natural next step, and one that some of our customers are already asking for — essentially extending the same rigour and transparency to the cost side of the equation.
Finally, we’re not losing sight of efficiency. With everything running in the cloud, we’re continually optimising to keep performance high and costs reasonable. Every improvement we make in code efficiency or resource usage is a win for us and our customers — especially in an era where cloud costs matter.
This journey — from fragile to future-ready — has been a rewarding one. We took a critical process that was held together with tape and string, and turned it into a strength. We’ve delivered something that’s faster, smarter, and built for whatever tomorrow brings. Most importantly, we did it in a way that never loses sight of why it matters: to give energy suppliers clarity and confidence in their revenue, so they can make better decisions.
And if there’s one takeaway I’m particularly proud of, it’s that we anticipated an industry-wide shift and acted proactively. By the time it arrived, we weren’t sweating — we were ready. That’s a great position to be in, and it’s exactly what we mean by “future-proofing”.
Note: ENSEK’s Financial Assurance capability is delivered through our Ignition platform and is not available as a standalone product. This ensures seamless data integration, governance, and audit-grade controls.
Discover how ENSEK’s Financial Assurance capabilities help energy suppliers reduce leakage, improve auditability, and build trust — all while preparing for what’s next.