RevPerfect

Revenue blog · 11 min read · 6 June 2026

Forecast accuracy is the wrong metric. Demand calendar accuracy is the right one.

Written by Arshad Kacchi, Founder & CEO of RevPerfect — Perth.


An RMS deck I sat through in 2024 opened with a single bold figure: 96.4 percent forecast accuracy, trailing 90 days. The room nodded. Six minutes later we got to the slide that mattered — the four compression Fridays in the quarter — and the same model had under-called every one of them by between 14 and 22 rooms. The trailing-90-day headline had averaged across forty quiet midweeks and four loud Fridays, and the noise of the easy dates had buried the signal on the hard ones. Hotel forecast accuracy was high on the slide. The pricing decisions on the four dates that paid the bills had been wrong every time. This piece is the metric I now put on the cover slide instead, and how to build it.

What hotel forecast accuracy actually means in 2026

Forecast accuracy is the closeness of a forecast value to the actual outcome, averaged across some period and expressed as a single number. The three statistics most properties use are mean absolute percentage error (MAPE), root mean squared error (RMSE), and flat percentage variance. Each has a different bias. MAPE inflates the weight of misses on low-demand dates, because the small actual sits in the denominator. RMSE penalises large misses disproportionately. Flat variance lets over-calls and under-calls cancel out, which makes it the easiest to game.
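
The three statistics can be put side by side on one toy series. A minimal Python sketch, with illustrative figures that are not from any property, showing how the same five dates produce three different readings:

```python
# Illustrative five-date series: small misses on four dates, one large miss.
import math

forecasts = [100, 102, 98, 95, 70]
actuals   = [101, 100, 99, 96, 90]   # the last date is the large miss

# MAPE: mean of per-date absolute percentage errors
mape = sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)

# RMSE: the squaring means the single large miss dominates the result
rmse = math.sqrt(sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals))

# Flat variance: totals only, so opposite-direction misses can cancel
flat = (sum(forecasts) - sum(actuals)) / sum(actuals)

print(f"MAPE {mape:.1%}  RMSE {rmse:.1f} rooms  flat variance {flat:.1%}")
```

On this series MAPE lands around 5 percent, RMSE around 9 rooms, and flat variance around minus 4 percent: three defensible headlines from the same ten numbers.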

The harder problem is not which statistic to use. It is what the statistic is doing once you compute it. A 96 percent accuracy figure across a quarter is an average of about ninety daily error values. Inside that average sits a distribution. A tight distribution around the mean is one story. A bimodal distribution with quiet-date precision and compression-date misses is a completely different story. The headline does not distinguish between them.

This is why the broader forecasting habit covered in hotel demand forecasting treats the rooms number as one of three layers, not the layer itself. The rooms number feeds staffing and F&B. The demand calendar feeds pricing. They are not the same surface and they do not collapse into one metric.

The arithmetic — and the worked example that shows why the headline lies

Two simple working definitions to keep the rest of this clean. Forecast accuracy at a date is one minus the absolute percentage error: 1 − |Forecast − Actual| ÷ Actual. Demand calendar accuracy at a date is binary — the date was either classified into the correct demand band at the chosen lead time, or it was not.

A model can post 96 percent forecast accuracy and 58 percent demand calendar accuracy on the same dataset. The first is the easy half of the job. The second is the half that decides RevPAR.
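
The two definitions can be computed side by side. A minimal sketch on a hypothetical ten-date sample for a 120-room property; the band thresholds follow the four-band scheme used in the ritual later in this piece, and every figure is illustrative:

```python
ROOMS = 120

def band(occupancy):
    """Classify an occupancy fraction into a demand band (thresholds from the ritual)."""
    if occupancy < 0.55:
        return "low"
    if occupancy < 0.80:
        return "normal"
    if occupancy < 0.95:
        return "high"
    return "compression"

# (forecast rooms at the 14-day-out checkpoint, actual rooms) per date -- illustrative
dates = [(70, 72), (68, 66), (75, 74), (90, 88), (100, 97),
         (72, 70), (66, 68), (95, 92), (100, 114), (102, 118)]

# Forecast accuracy: one minus the mean absolute percentage error
mape = sum(abs(f - a) / a for f, a in dates) / len(dates)
forecast_accuracy = 1 - mape

# Demand calendar accuracy: share of dates whose assigned band matched the actualised band
hits = sum(band(f / ROOMS) == band(a / ROOMS) for f, a in dates)
calendar_accuracy = hits / len(dates)
```

On this sample the forecast accuracy headline is about 95 percent while the demand calendar accuracy is 80 percent, and both misses are the two dates that actualised in compression.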

A worked example on a 120-room urban property, one quarter, 90 trading days. The forecast at the 14-day-out checkpoint produced a rooms number per date. The actuals came in. The trailing-90 MAPE was 4.2 percent — a 95.8 percent accuracy headline.

The decomposition. Sixty-eight of the ninety dates were quiet midweek or shoulder-weekend nights with stable demand and small absolute errors. Mean absolute error on those: 1.8 rooms per night. Twenty-two of the ninety dates were either weekend or event-influenced. Mean absolute error on those: 8.4 rooms per night. Of those twenty-two, six were properly compressed nights where the building ended up at 100 percent. Mean absolute error on the compressed six: 14.3 rooms — systematically under-called.

What the headline missed: the model got the easy dates right and got the dates that drive pricing wrong, by a lot. A correctly anticipated compression Friday could have supported a $40 rate lift. Six compressed dates at +$40 on roughly 100 sold rooms is $24,000 of RevPAR per quarter the model quietly left on the floor.

Now run demand calendar accuracy on the same dataset. The 14-day-out classification put each date into one of four bands: low, normal, high, compression. Sixty-eight of the ninety were classified correctly. Demand calendar accuracy: 75.6 percent. Four of the six compressed dates, the Fridays, were classified as high instead of compression at the 14-day mark. Same dataset, two completely different stories.

Where headline forecast accuracy breaks down

Three failure modes show up across the properties I have worked with. None of them are flaws in the underlying maths. They are flaws in how the metric is read in the room.

1 — the absolute error is the wrong unit on a high-occupancy date. A four-room miss on a 60-percent-occupied Tuesday is operationally trivial. A four-room miss on a 96-percent-occupied compression Saturday is the difference between holding the rate and dropping it. Percentage error treats both as equivalent. They are not. The pricing decision depends on which side of the compression line the property ended up on, not on the absolute room delta.

2 — the accuracy metric and the pricing surface are different. The rooms number is for payroll, F&B, and operational staffing. The demand band is for pricing, restrictions, and channel posture. Optimising the model for headline accuracy means optimising for the wrong surface. The compression-date misses are a feature of optimising for total RMSE, not a bug.

3 — accuracy reads as a verdict, not a question. A 96 percent headline closes the conversation. A 76 percent demand calendar accuracy opens it — which dates were misclassified, which direction, what override should have caught it. The metric stack referenced in ADR vs RevPAR vs GOPPAR exists for the same reason: every headline number is the start of the read, not the end.

What to do about it — the five-step demand calendar accuracy ritual

The monthly sequence I run on every property that has a clean forward forecast and at least one full quarter of actualised data. Forty minutes when the snapshot history is clean, ninety when it is not. The ritual sits inside the broader pickup-and-pace habit covered in hotel pickup and pace explained.

  1. Lock the demand band definitions. Four bands is the operating sweet spot — low, normal, high, compression. Each band gets a hard occupancy threshold (under 55 percent, 55 to 80 percent, 80 to 95 percent, 95 percent and above) and an associated posture. The thresholds are written down. Nobody negotiates them mid-month.
  2. Capture the 14-day-out classification. One row per future date in the next 90. One column for the assigned band at the 14-day-out checkpoint. The snapshot has to be frozen on the day — not reconstructed from memory. The daily capture is what makes the monthly review honest.
  3. Record the actualised band for every past date. When a date closes, the actualised band is computed from the final on-the-books and posted. The pair (assigned, actualised) is the unit of measurement.
  4. Decompose the misses into three categories. One band low — the date ran hotter. One band high — the date ran softer. Two-plus bands off — structurally wrong, needs a written override. The mix matters: 75 percent accuracy with one-band-low misses on compression dates is a different problem than 75 percent with one-band-high misses on shoulder dates.
  5. Put the demand calendar accuracy line on the cover slide. Headline rooms accuracy moves to the appendix. The demand calendar line carries the percentage, the trailing-three-month trajectory, and a one-line decomposition. The discussion that follows is where the next month's posture gets decided.
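
Steps 3 and 4 reduce to a few lines of scoring. A minimal sketch, where the band pairs are hypothetical and the three miss categories follow the decomposition above:

```python
BANDS = ["low", "normal", "high", "compression"]
RANK = {b: i for i, b in enumerate(BANDS)}

def decompose(pairs):
    """pairs: (assigned_band, actualised_band) per closed date."""
    counts = {"correct": 0, "one_band_low": 0, "one_band_high": 0, "two_plus": 0}
    for assigned, actual in pairs:
        delta = RANK[actual] - RANK[assigned]
        if delta == 0:
            counts["correct"] += 1
        elif delta == 1:
            counts["one_band_low"] += 1    # the date ran one band hotter than called
        elif delta == -1:
            counts["one_band_high"] += 1   # the date ran one band softer than called
        else:
            counts["two_plus"] += 1        # structurally wrong: needs a written override
    accuracy = counts["correct"] / len(pairs)
    return accuracy, counts

# Illustrative month of closed dates
pairs = [("normal", "normal"), ("high", "compression"), ("normal", "normal"),
         ("high", "high"), ("normal", "low"), ("low", "high")]
accuracy, misses = decompose(pairs)
```

The returned counts are the one-line decomposition that goes on the cover slide next to the percentage.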

The ritual is boring on any given month. The compound across four quarters is where the value lives. A property running this monthly typically moves from sub-70 to 78 to 82 percent demand calendar accuracy across the first year, and the misses that remain start clustering on the hardest dates.

Demand bands, demand calendar, and the operator question they answer

A short comparison table to keep the two metrics side by side:

| | Forecast accuracy (MAPE) | Demand calendar accuracy |
| --- | --- | --- |
| What it measures | Closeness of forecast rooms to actual rooms | Whether each date was classified into the correct demand band |
| Unit | Percentage error on rooms | Percentage of dates correctly classified |
| Best for | Staffing, payroll, F&B production | Pricing, restrictions, channel posture |
| Failure mode | Headline rewards quiet dates, hides compression misses | Requires honest 14-day-out snapshot capture |
| Typical operating range | 92–98 percent accuracy on a stable property | 70–82 percent at the 14-day window |
| Where it lives | Appendix slide, operational planning pack | Cover slide, revenue read |

Both belong on the slide deck. Only one belongs on the cover.

A real scenario: 140-key CBD, one quarter, two completely different stories

A 140-key CBD property I worked with through 2024. The trailing-90-day RMS headline on rooms was 96.1 percent. The deck rolled into quarterly owner review under the assumption forecasting was working.

The decomposition. Sixty-one normal-band trading days, mean absolute error 1.4 rooms. Twenty-three high-band weekend or event dates, mean absolute error 6.2 rooms. Six dates ended in compression, and the model had classified them as high-band at the 14-day checkpoint. Mean absolute error on those six: 16.8 rooms, systematically under-called. Demand calendar accuracy: 71 percent.

The override was a written rule: any date forecast at high-band that also showed positive STLY at the 14-day-out checkpoint and any one of three compression signals (event proximity, pickup velocity above seasonal mean, comp-set BAR lift) escalates to compression-band manually. Across the next quarter, headline accuracy still posted 95.6 percent. Demand calendar accuracy lifted from 71 to 79 percent. Three of four compression Fridays were correctly classified at the 14-day mark, the fourth caught at the 7-day re-check. Blended ADR on the three lifted A$28. RevPAR uplift across the four compression dates: roughly A$11,200 net of distribution cost. Same building. Same model. Different metric on the cover slide.
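
The written rule is small enough to sketch as a function. The signal names here are illustrative labels for the three checks described above, not fields from any real RMS:

```python
def escalate_band(band, stly_delta, near_event, pickup_above_seasonal, compset_bar_lift):
    """Return the band to price against at the 14-day-out checkpoint.

    Any high-band date with positive STLY and at least one of the three
    compression signals escalates to compression manually.
    """
    signals = [near_event, pickup_above_seasonal, compset_bar_lift]
    if band == "high" and stly_delta > 0 and any(signals):
        return "compression"
    return band
```

A date assigned high with STLY up and an event nearby escalates; the same date with STLY down stays at high, which is what keeps the rule from over-firing on soft comparisons.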

How demand calendar accuracy ties into the rest of the stack

Demand calendar accuracy is a forecast-quality metric, but the work it forces is the same as the work covered in hotel revenue management strategies for 2026: write the rules down, run the same routine every period, and let the override log compound. The metric is the surface. The override log is the substance.

Macro context still matters. The Australian Bureau of Statistics short-term visitor arrivals and the Tourism Research Australia domestic outlook distinguish a property-specific problem from a market-wide one.

FAQ — forecast accuracy and demand calendar accuracy

What is hotel forecast accuracy?

The closeness of forecast rooms, occupancy, ADR, or revenue to the actual outcome across a period, expressed as one of three statistics — mean absolute percentage error (MAPE), root mean squared error (RMSE), or a flat percentage variance. Each one has a different sensitivity to outliers and a different operating bias.

Why is forecast accuracy the wrong primary metric?

The headline averages across every date in the period. A property that calls ten quiet midweeks correctly and misses four compression Fridays can still post 96 percent accuracy. The compression dates are where the pricing decisions actually pay back. The headline rewards calling the easy dates and hides the misses on the dates that drive RevPAR.

What is demand calendar accuracy?

The percentage of future dates correctly classified into demand bands — low, normal, high, compression — at a defined lead time, usually 7 or 14 days out. It measures whether the forecast informed the right operating posture for each date, not whether the absolute rooms number was right to one decimal place.

How do I calculate demand calendar accuracy?

For each future date in the period, record the demand band you assigned at the 7 or 14-day-out checkpoint and the demand band the date actually ran at. Divide the count of matching pairs by the total count of dates. A 78 percent demand calendar accuracy means 78 percent of dates were classified into the band they actually delivered.

What is a good demand calendar accuracy target?

There is no universal number. A stable urban property with a clean year of snapshot history usually settles between 70 and 82 percent at the 14-day window. Resort and event-led properties run lower. The trajectory matters more than the level: a property moving from 62 to 74 percent over six months is doing the work, regardless of the absolute number.

Should I stop measuring forecast accuracy?

No. Forecast accuracy still earns a place on the long-form audit slide because the absolute rooms number is the input into staffing, payroll, and F&B production planning. The point is that demand calendar accuracy belongs on the cover slide. Forecast accuracy belongs in the appendix.

How often should demand calendar accuracy be reviewed?

Monthly at minimum, on a rolling 90-day window. A weekly read on the prior week's dates is the discipline that turns the metric into a learning loop. The override log written through the week is the input. The accuracy table is the output. The pair is what teaches the demand calendar over time.

A note on what this is for

Headline hotel forecast accuracy is the easiest metric in revenue management to look good on. Average across enough quiet dates and the number lands between 93 and 97 percent on almost any property. It reassures the room. It rarely informs the next pricing decision. Demand calendar accuracy is harder to compute, lower in absolute terms, and more honest about which dates the property is reading correctly. The first time a deck opens with 74 percent demand calendar accuracy instead of 96 percent forecast accuracy, the conversation that follows is materially different.

That discipline is what we built RevPerfect for: a demand calendar that classifies every future date into a band at the 14-day-out checkpoint, captures the assigned band as a frozen snapshot, scores the band against the actualised result when the date closes, and surfaces the override log as the working artefact alongside the accuracy table. One input into the broader forecasting habit covered in hotel demand forecasting, but the demand calendar accuracy view is where most desks find the first measurable lift because the metric forces the right work. Try RevPerfect free → or book a 20-minute walkthrough.
