Following on from our recent focus on revenue assurance and auditability, this post looks at the technical transformation behind the scenes — how we rebuilt the gross margin reporting pipeline to make it faster, more resilient, and ready for the demands of high-frequency meter data.
ENSEK’s gross margin reporting had grown fragile, relying on a legacy process that wasn’t keeping up. We saw late runs, patchy data, and too many manual fixes — not exactly confidence-inspiring when you’re dealing with millions in revenue. So we rebuilt it. We replaced the shaky old system with a robust automated pipeline on ENSEK’s modern data platform (our “Lakehouse” architecture). Guided by our commitment to Lives, not Just Loads, we eliminated points of failure, added transparency, and made the whole process faster and more resilient.
But we didn’t stop there. With the shift to high-frequency meter data looming — a regulatory shift to 48 meter readings per day per customer — we knew an even bigger data deluge was coming. Rather than wait and scramble, we fine-tuned our new pipeline ahead of time to handle the high-volume era. In this post, I’ll break down how we tackled both challenges — first fixing the foundation, then scaling up — and the impact it’s had on reliability, auditability, and financial control.
Gross margin reporting calculates the difference between billed revenue and the cost of goods sold. It’s a core part of how energy suppliers track financial performance and ensure billing accuracy.
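In its simplest form, the calculation looks like this. A minimal illustrative sketch in Python; the field names and figures are hypothetical, not ENSEK's actual schema:

```python
def gross_margin(billed_revenue: float, cost_of_goods_sold: float) -> dict:
    """Return the margin and margin percentage for one billing period."""
    margin = billed_revenue - cost_of_goods_sold
    pct = margin / billed_revenue * 100 if billed_revenue else 0.0
    return {"margin": round(margin, 2), "margin_pct": round(pct, 2)}

# e.g. £120,000 billed against £95,000 cost of energy supplied
result = gross_margin(120_000.00, 95_000.00)
print(result)  # {'margin': 25000.0, 'margin_pct': 20.83}
```

The real pipeline computes this at a far finer grain (per meter, per period, per charge type), but the core reconciliation is the same subtraction.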
Our legacy gross margin process was starting to show its age. It occasionally ran late or underperformed, which meant confidence in the numbers wasn’t where it needed to be.
With the shift to high-frequency meter data on the horizon — a regulatory change that replaces monthly meter reads (if we’re lucky!) with 48 half-hourly data points per customer per day — we saw a clear opportunity. Rather than wait for pressure to mount, we acted early to make gross margin reporting more scalable, transparent, and resilient.
The first step was migrating off the fragile legacy setup and onto a sturdier, scalable platform. We rebuilt the gross margin process using ENSEK’s cloud-based Lakehouse data platform. That meant automating the pipeline end-to-end — from data ingestion through to output — and eliminating the brittle hand-offs that had made the old process prone to failure.
We also removed external dependencies that had previously caused delays and moved to an incremental change model with distributed compute, so even as data grew, we could handle it in chunks and keep things moving quickly.
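The idea behind the incremental model can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration (the real pipeline runs on distributed compute): rather than recomputing every day's margin on every run, we reprocess only the partitions whose source data has changed:

```python
def incremental_run(changed_days, computed, compute_margin):
    """Recompute margins only for days with new or revised source data."""
    for day in changed_days:
        computed[day] = compute_margin(day)  # idempotent per-partition job
    return computed

# Toy example: the per-day margin is just a lookup here.
source = {"2026-01-01": 100.0, "2026-01-02": 140.0, "2026-01-03": 95.0}
computed = {"2026-01-01": 100.0, "2026-01-02": 130.0}  # stale value for the 2nd

updated = incremental_run(["2026-01-02", "2026-01-03"], computed, source.get)
# Only the changed and new days were recomputed; 2026-01-01 was untouched.
```

Because each partition's job is idempotent, late-arriving or revised data just triggers a targeted re-run instead of a full rebuild, which is what keeps run times flat as history grows.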
The result? A gross margin job that used to need babysitting now runs like clockwork. These changes have been in production for over a year, and the difference is night and day. Results are now delivered same-day, and our team (and our clients' finance teams) can start the day confident that yesterday's margins are correctly calculated and on their desks.
Just as importantly, we built in much deeper transparency. The new pipeline doesn't just spit out a final number; it also preserves all the intermediate data and decisions. If an auditor or a finance analyst asks, "Why is this margin figure what it is?", we can now drill down to every component: the consumption data we used, the prices applied, the adjustments made, all of it. Under the old system, a question like that meant a lot of manual digging (and sometimes never getting a clear answer). Under the new one, we can trace and explain any number, often in minutes.
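One way to picture that traceability: carry the inputs along with the output, so any figure can be explained after the fact. A hypothetical sketch, with illustrative field names rather than our production schema:

```python
def margin_with_lineage(meter_id, consumption_kwh, unit_price, cost, adjustments):
    """Compute a per-meter margin while preserving every input that produced it."""
    revenue = consumption_kwh * unit_price
    margin = revenue - cost + sum(adjustments)
    return {
        "meter_id": meter_id,
        "margin": round(margin, 2),
        # Preserved intermediates: the "why" behind the number.
        "lineage": {
            "consumption_kwh": consumption_kwh,
            "unit_price": unit_price,
            "revenue": round(revenue, 2),
            "cost": cost,
            "adjustments": adjustments,
        },
    }

record = margin_with_lineage("MTR-001", 350.0, 0.28, 70.0, [-2.50])
# An analyst can read every component of the margin straight off the record.
```

In practice this lineage lives in the Lakehouse as queryable intermediate tables rather than nested records, but the principle is the same: never discard the inputs that produced a reported number.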
In short, we engineered out the flakiness. The gross margin process is now solid, transparent, and fast. And this was the crucial foundation we needed before tackling the next challenge: scaling for high-frequency meter data.
Once we had a reliable daily process, we turned to the looming data explosion driven by the shift to high-frequency meter data. ENSEK went live with more granular settlement in early 2026. Instead of a meter reading every month or so per customer, we now receive 48 readings per customer per day. Multiply that by hundreds of thousands of meters, and you can imagine the scale of data flowing in.
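A quick back-of-envelope check makes the scale concrete. The 300,000-meter portfolio below is an illustrative assumption (actual portfolio sizes vary by supplier):

```python
readings_per_day = 48   # half-hourly settlement cadence
meters = 300_000        # hypothetical portfolio size
days_per_year = 365

daily = readings_per_day * meters   # readings arriving per day
yearly = daily * days_per_year      # readings arriving per year

print(f"{daily:,} readings per day, {yearly:,} per year")
# 14,400,000 readings per day, 5,256,000,000 per year
```

Even at this modest assumed portfolio size, that's over five billion readings a year, compared with roughly 3.6 million under monthly reads.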
To make sure we were prepared, I decided to put our system through its paces well before go-live. I took two years’ worth of synthetic half-hourly data — billions of data points — and ran them through our gross margin pipeline to simulate a future-state scenario. The first attempt was eye-opening: the job stalled and consumed far too many resources. In other words, if we did nothing, the volumes would have overwhelmed even our new pipeline. This early test was exactly the wake-up call we needed — and we got it at the right time, not when real data was on the line.
Armed with that insight, we went back into build mode. We reviewed the pipeline’s design and made targeted improvements to ensure it could handle the increased cadence and volume of data. Rather than adding complexity, we focused on simplifying and streamlining — removing inefficiencies and adapting the process to better reflect the expected rhythm of incoming data.
After these improvements, we ran the simulation again. This time, it sailed through. What previously choked the system now completed in under 10 minutes. It was a ridiculous improvement — in the best possible way. The pipeline could handle the 48x increase in data volume and then some, while remaining stable and cost-efficient under load.
Now that high-frequency data is flowing in, we haven’t had to scramble. The day the industry flipped the switch, we were already there — our systems just kept running. We continue to monitor and fine-tune (there are always a few quirks when something this big rolls out), but we didn’t need to make any fundamental changes at go-live. That’s the real payoff of future-proofing: when the future arrives, it’s business as usual.
All these changes haven't just made the process faster and more scalable; they've made the reporting itself more valuable. Gross margin reporting is now more than a compliance requirement. It's becoming a tool for insight and action, helping users understand and trust their revenue data, and spot issues or opportunities that would previously have gone unnoticed.
The changes weren't just technical; they led to very tangible business outcomes.
In short, we’ve gone from a situation where gross margin reporting was a risk and a burden to one where it’s a source of assurance and insight. For our clients, it means they can trust the data we provide and use it to improve their own operations. For us at ENSEK, it strengthens our reputation as a partner that delivers reliable, forward-thinking solutions.
With the technical heavy lifting done, our attention has shifted to user adoption and continuous improvement. We want to make sure every relevant team is making the most of the enhancements now in place.
We’re supporting the transition away from the “classic” reports to the new platform outputs. Change management in reporting is as much a human challenge as a technical one — people get comfortable with established formats. Across the business, teams are helping users make the switch by providing side-by-side comparisons, updated dashboards, and support where needed. Our goal is to retire the legacy output entirely in the coming months. We’re pretty close: most users have already made the switch, and they’re not looking back.
Now that high-frequency meter data is flowing in, we're staying vigilant, but we're not in crisis mode. In fact, the early part of 2026 was deliberately kept light in terms of other projects so we could focus on monitoring and fine-tuning. As expected, there have been a few minor wrinkles as industry data started flowing in new ways. We've been able to address those quickly, for example by adjusting to updated formats in the incoming data and collaborating with programme teams on any unexpected behaviours.
The key is that our core architecture held strong; no fundamental changes were needed. It’s a lot easier to tweak a parameter or two in a stable system than to design a whole new process under pressure. In other words, our early preparation meant go-live week was busy but calm — no fire-fighting, just close observation and pride in seeing the system do what it was built to do.
Looking further ahead, with high-frequency data now part of business-as-usual, we’re excited about what we can build on this foundation. More granular data opens the door to new possibilities: time-of-use pricing insights, more accurate accruals and forecasts, even predictive analytics on margin. Also on the roadmap is continuing to enrich the gross margin model itself.
We’ve been working on integrating more cost data (like network charges and other fees) into the platform so that eventually our gross margin report becomes a full profitability report — not just revenue and billing system reconciliation. That’s a natural next step, and one that some of our customers are already asking for — essentially extending the same rigour and transparency to the cost side of the equation.
Finally, we’re not losing sight of efficiency. With everything running in the cloud, we’re continually optimising to keep performance high and costs reasonable. Every improvement we make in code efficiency or resource usage is a win for us and our customers — especially in an era where cloud costs matter.
This journey — from fragile to future-ready — has been a rewarding one. We took a critical process that was held together with tape and string, and turned it into a strength. We’ve delivered something that’s faster, smarter, and built for whatever tomorrow brings. Most importantly, we did it in a way that never loses sight of why it matters: to give energy suppliers clarity and confidence in their revenue, so they can make better decisions.
And if there’s one takeaway I’m particularly proud of, it’s that we anticipated an industry-wide shift and acted proactively. By the time it arrived, we weren’t sweating — we were ready. That’s a great position to be in, and it’s exactly what we mean by “future-proofing”.
Note: ENSEK’s Financial Assurance capability is delivered through our Ignition platform and is not available as a standalone product. This ensures seamless data integration, governance, and audit-grade controls.