Revenue blog · 11 min read · 22 June 2026
How to build a hotel demand calendar in 90 minutes
The most expensive artefact on most revenue desks does not exist. The forecast lives in the RMS. The pace report lives in the PMS. The event diary lives in a shared inbox. The comp-set read lives on a screenshot. None of them, individually, tells you what to do about Saturday three weeks from now. Building a hotel demand calendar pulls those four artefacts onto one dated grid — and once they are on one grid, the pricing decision starts answering itself. This is the 90-minute first-pass build, the five inputs, the four bands, the override log, and the monthly refresh ritual I have run on every property I have worked with.
What a hotel demand calendar actually is in 2026
A demand calendar is a forward-looking, dated grid. One row per future trading date. One column for the assigned demand band at a defined lead time. Sitting beside it: the inputs that produced the band, the pricing posture the band requires, and the override log entry if a revenue manager overrode the system read. That is the whole artefact.
It is not a forecast. The forecast produces a rooms number. The demand calendar produces a posture. A 96-room forecast on a 120-room building is one piece of information. The same property classified as compression at the 14-day-out checkpoint is a different piece of information — and the second one is what tells the desk to close discount channels and lift the rate.
The broader forecasting habit covered in hotel demand forecasting treats the rooms number, the demand calendar, and the override log as three layers of the same workflow. Most desks build the first. Fewer build the second. Almost none build the third.
The four bands and the five inputs that drive them
Start with the bands. Four is the operating sweet spot — narrow enough to discriminate, broad enough that nobody negotiates the boundary on a Wednesday morning. The thresholds below are the defaults I have seen travel across CBD, resort, and conference properties. Tune them once for the building you are pricing, then leave them alone.
- Low — under 55 percent on-the-books occupancy at the 14-day-out checkpoint. Posture: every channel open, all promo-rate buckets active, no minimum-stay restrictions, BAR sits at floor.
- Normal — 55 to 80 percent. Posture: standard channel mix, promo-rate buckets active selectively, BAR at the seasonal mid-point.
- High — 80 to 95 percent. Posture: discount channels close in stages, minimum-stay considered on shoulder dates, BAR lifted to the upper quartile of the rate-spread.
- Compression — 95 percent and above. Posture: discount channels closed, BAR at ceiling, two-night minimum considered on the arrival side, wholesale allocations capped.
The five inputs that drive the band classification:
- On-the-books occupancy at the 14-day-out checkpoint. The single biggest signal. A property already sitting at 78 percent fourteen days out, on a date that historically arrived at 89 percent from a lower fourteen-day base, is pacing toward compression.
- Pickup velocity over the trailing 7 days. Rooms picked up in the last week, relative to the seasonal mean. Above the mean is a positive signal; below is a negative one. The same approach is unpacked in hotel pickup and pace explained.
- Same-time-last-year pacing. On-the-books today versus on-the-books the same number of days out a year ago, for the matching arrival weekday — not the matching date. The reading is unpacked in what is STLY pickup.
- Event proximity. Citywide events, sport, conference, public holiday. A binary input. Either the building is within the demand-pull radius or it is not.
- Comp-set BAR direction over the prior 72 hours. The market either lifted, held, or softened. A lift on a date already running hot is a compression confirmation. A soft market on a date the property is calling high is the first signal to demote the band.
None of the five is sufficient alone. The combination is what turns the forecast into a band.
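The threshold table and the signal combination above are mechanical enough to sketch. A minimal Python version, assuming the default boundaries from this section; the promotion and demotion nudges for boundary dates are illustrative shorthand, not a production classifier:

```python
def classify_band(otb_occupancy: float,
                  pickup_vs_mean: float,
                  stly_delta: float,
                  event_flag: bool,
                  comp_set_direction: str) -> str:
    """Assign one of the four demand bands at the 14-day-out checkpoint.

    otb_occupancy: on-the-books occupancy as a fraction (0.78 = 78 percent)
    pickup_vs_mean: trailing-7-day pickup minus the seasonal mean, in rooms
    stly_delta: on-the-books today minus same-time-last-year, in rooms
    comp_set_direction: "lifted", "held", or "softened" over the prior 72 hours
    """
    bands = ["low", "normal", "high", "compression"]

    # Base band from the occupancy thresholds.
    if otb_occupancy >= 0.95:
        idx = 3
    elif otb_occupancy >= 0.80:
        idx = 2
    elif otb_occupancy >= 0.55:
        idx = 1
    else:
        idx = 0

    # Count the positive signals: hot pickup, pacing ahead of STLY,
    # an event in the demand-pull radius, and a lifting comp set.
    positives = sum([pickup_vs_mean > 0, stly_delta > 0,
                     event_flag, comp_set_direction == "lifted"])

    # Three or more positives promote a boundary date one band; a soft
    # market with no positives is the first signal to demote one.
    if positives >= 3 and idx < 3:
        idx += 1
    elif comp_set_direction == "softened" and positives == 0 and idx > 0:
        idx -= 1

    return bands[idx]
```

A date at 78 percent on-the-books with hot pickup, positive pacing, and an event flag classifies as high rather than normal — exactly the boundary case the override log exists to record.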
The 90-minute build — what to do, in order
Block 90 minutes. Close the inbox. The build is best done on a Tuesday or Wednesday when the front-of-window is quiet and the back-of-window is stable. Three thirty-minute blocks.
Block one — define the bands and thresholds (0 to 30 minutes). Open a fresh spreadsheet. Write down the four band names, the occupancy thresholds, the pricing posture, and the restriction posture. The thresholds get written once and re-audited monthly. The artefact for this block is a one-page reference card pinned to the top of the working file.
Block two — ingest the next 90 days of forward data (30 to 60 minutes). One row per future date. Columns: arrival weekday, on-the-books rooms, on-the-books occupancy percentage, trailing-7-day pickup, same-time-last-year on-the-books for the matching arrival weekday, event flag, comp-set BAR direction. The data lives in the PMS, the snapshot history, and the event diary.
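The row shape for block two can be written down once as a typed record, so every monthly refresh ingests the same columns. A sketch, assuming one row per future stay date with the columns listed above; the field names are mine:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CalendarRow:
    """One forward trading date: the five inputs plus identifying context."""
    stay_date: date
    arrival_weekday: str          # "Mon" through "Sun"
    otb_rooms: int                # on-the-books rooms
    otb_occupancy: float          # on-the-books occupancy, 0.0 to 1.0
    pickup_trailing_7d: int       # rooms picked up over the trailing 7 days
    stly_otb_rooms: int           # same-time-last-year, matching arrival weekday
    event_flag: bool              # inside the demand-pull radius or not
    comp_set_bar_direction: str   # "lifted", "held", or "softened"
```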
Block three — classify every date and write the override log (60 to 90 minutes). For each row, apply the threshold table. Most dates will classify cleanly. Five to fifteen percent will sit on a boundary — the on-the-books says normal but the pickup velocity says high, or the same-time-last-year is positive but the comp-set is soft. Those get the override. Every override gets a row in the log: the date, the system band, the assigned band, the trigger signal, and the closing band when the date arrives.
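The override log row described above can also be pinned down as a record, so no field gets skipped on a busy morning. A sketch with illustrative field names; the closing band stays empty until the date arrives:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class OverrideEntry:
    """One row of the override log: why a date was moved off its system band."""
    stay_date: date                      # the future trading date
    system_band: str                     # what the threshold table said
    assigned_band: str                   # what the revenue manager set instead
    trigger_signal: str                  # which of the five inputs fired
    closing_band: Optional[str] = None   # filled in when the date arrives

    def paid_off(self) -> Optional[bool]:
        """Did the override match what the date actually delivered?"""
        if self.closing_band is None:
            return None
        return self.closing_band == self.assigned_band
```

The `paid_off` reading is what the monthly refresh aggregates: overrides that keep paying off in one direction are the signal that the underlying thresholds are biased.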
Where the demand calendar breaks down — four failure modes
A demand calendar without an override log is a wall poster. A demand calendar with an override log is a working artefact. The difference shows up in the third month, not the first.
Four failure modes show up across the properties I have worked with. None of them are flaws in the underlying structure. They are flaws in how the artefact is read in the room.
1 — the thresholds drift and nobody audits them. A property writes the bands once, then re-uses them three years later as the building changes shape. The monthly audit is one number: across the prior month, what percentage of dates classified as compression actually ran at 95 percent or above. If the figure sits below 80 percent, the threshold is wrong.
2 — the override log is not written down. A revenue manager moves a date into compression on instinct. The date arrives at 88 percent. No record of why, no learning loop. Three months later the same instinct fires and the same miss happens. The override log forces the instinct into language, and the language is what compounds.
3 — the calendar is treated as a forecast. The desk reads the band and assumes the rooms number will follow. The band is the posture, not the prediction. The posture and the outcome are two different surfaces. The first decides the pricing. The second informs the next monthly audit.
4 — the calendar is built once and never refreshed. A demand calendar without a daily front-of-window refresh and a monthly threshold audit is a single-point estimate dressed up as a grid. The same logic referenced in ADR vs RevPAR vs GOPPAR applies — every artefact in revenue management is a starting point, not a verdict.
The monthly refresh — the 20-minute ritual that keeps the artefact alive
After the first 90-minute build, the demand calendar gets refreshed monthly in about twenty minutes. The discipline matters more than the duration. The five-step monthly refresh:
- Audit the prior month's thresholds. Of dates classified as compression, what percentage ran at 95 percent or above? Of dates classified as low, what percentage ran below 55 percent? If either drifts below 80 percent agreement, the thresholds are stale and the boundaries need a small adjustment.
- Read the override log entries. Each override gets a closing row added — what the date actually delivered. The pattern across overrides is what teaches the calendar. If every override has paid off in the same direction, the underlying thresholds are biased and need to move.
- Re-ingest the next 90 days of forward data. One refreshed row per future date, same five inputs.
- Re-classify and flag new boundary dates. Most dates will hold their band. A small number will move — usually the dates that have picked up faster than expected, or the dates a new event diary entry has lifted.
- Note the demand calendar accuracy headline. The percentage of prior-month dates classified into the band they actually delivered, captured at the 14-day-out snapshot. The trajectory matters more than the level. A property moving from 68 to 76 percent over three monthly refreshes is doing the work.
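Steps one and five both reduce to one agreement ratio over closed dates. A sketch of that calculation, assuming each closed date carries the band it was classified into at the 14-day snapshot and the band it actually delivered:

```python
def band_agreement(closed_dates, band=None):
    """Share of dates whose 14-day-out band matched the delivered band.

    closed_dates: list of (classified_band, delivered_band) pairs.
    band: restrict to one classified band for the threshold audit,
          or None for the overall accuracy headline.
    """
    rows = [(c, d) for c, d in closed_dates if band is None or c == band]
    if not rows:
        return None
    return sum(1 for c, d in rows if c == d) / len(rows)

# A toy month of closed dates: (band at 14 days out, band delivered).
month = [
    ("compression", "compression"), ("compression", "high"),
    ("high", "high"), ("normal", "normal"), ("low", "normal"),
]
print(band_agreement(month, band="compression"))  # 0.5 -> below the 80 percent bar
print(band_agreement(month))                      # 0.6 -> the accuracy headline
```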
Twenty minutes a month. Twelve refreshes a year. The compound is where the value sits — the override log fattens, the threshold audit tightens, and the calendar starts catching the dates the RMS forecast quietly missed.
A short table to keep the four bands on one page
| Band | Occupancy at 14-day-out | Pricing posture | Restriction posture |
|---|---|---|---|
| Low | Under 55% | BAR at floor, every promo bucket open | No restrictions |
| Normal | 55–80% | BAR at seasonal mid-point | Selective minimum-stay on shoulder dates |
| High | 80–95% | BAR at upper quartile of rate-spread | Discount channels close in stages, minimum-stay considered |
| Compression | 95% and above | BAR at ceiling, no discount | Discount channels closed, two-night minimum on arrival side, wholesale capped |
One page, four rows. Pinned to the working file. Audited monthly. That is the artefact every revenue desk should own.
A real scenario: 120-key urban property, one quarter, the calendar earning its keep
A 120-key urban property I worked with through 2025. The building had a serviceable RMS forecast — trailing-90 MAPE of 4.1 percent — and no demand calendar. The quarterly read opened on RevPAR, ADR, and occupancy. The pricing posture across the quarter was reactive: open every channel until the front-office manager rang the alarm bell, then close the cheap channels and lift the rate two nights out.
We built the demand calendar in 95 minutes the first Tuesday of April. Four bands, five inputs, one override log. The April monthly refresh was 22 minutes. The May refresh was 18 minutes.
The decomposition across the next quarter. Nine dates classified as compression at the 14-day mark — five of which the RMS forecast had under-called and would have run at high-band by default. The override log moved those five into compression two weeks out. The pricing posture on the five: BAR up by an average of A$34, discount channels closed, two-night minimum on the arrival side, wholesale auto-allocation capped.
Closing read on the five overridden dates: four ran at 95 percent or above, one ran at 93 percent. Blended ADR on the five: A$311 versus a baseline trajectory of A$277. Five compression dates at +A$34 ADR on roughly 110 sold rooms each is approximately A$18,700 of incremental rooms revenue across one quarter — net of the lower distribution cost from closing the discount channels. The override log was the artefact that produced the lift. The demand calendar was the surface that held the log.
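The incremental figure in that closing read is worth recomputing by hand. A sketch using the scenario's own numbers:

```python
overridden_dates = 5        # dates the override log moved into compression
adr_lift = 34               # A$311 blended ADR versus the A$277 baseline
rooms_sold_per_date = 110   # approximate sold rooms on each date

incremental = overridden_dates * adr_lift * rooms_sold_per_date
print(f"A${incremental:,}")  # A$18,700 across the quarter
```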
How the demand calendar ties into the rest of the revenue stack
The demand calendar is one of three forecast-quality artefacts I have on the cover slide of every monthly read. The rooms forecast feeds staffing and F&B. The demand calendar feeds pricing. The override log feeds the next monthly audit. The same discipline is unpacked in the broader posture described in hotel revenue management strategies for 2026 — write the rules down, run the same routine, let the override log compound.
Macro context belongs on the read. The Australian Bureau of Statistics short-term visitor arrivals series and the Tourism Research Australia domestic outlook help separate a property-specific demand pattern from a market-wide one.
FAQ — building and running a hotel demand calendar
What is a hotel demand calendar?
A forward-looking, dated grid that classifies every future trading day into a demand band — low, normal, high, or compression — at a defined lead time. The artefact that connects the forecast inputs to the pricing posture on a per-date basis.
How long does it take to build one from scratch?
About 90 minutes for the first pass on a property with clean snapshot history and a working forecast — 30 minutes for the bands, 30 to ingest forward data, 30 to classify and write the override log. Monthly refreshes settle to about 20 minutes.
What inputs does the demand calendar use?
Five: on-the-books occupancy at the 14-day-out checkpoint, pickup velocity over the trailing 7 days, same-time-last-year pacing for the matching arrival weekday, event proximity, and comp-set BAR direction over the prior 72 hours. The combination matters more than any one of the five.
How many demand bands should the calendar use?
Four. Low (under 55 percent), normal (55 to 80 percent), high (80 to 95 percent), compression (95 percent and above). Three bands compress too much detail. Five bands invite boundary arguments mid-week.
How often should the demand calendar be reviewed?
Daily for the front-of-window (zero to fourteen days out). Weekly for the next 90 days. Monthly for the 90 to 180-day horizon. The daily read catches the moves the weekly read misses.
What is the override log and why does it matter?
The written record of every date the revenue manager moved into a different band than the inputs suggested. One row per override — date, system band, assigned band, trigger signal, closing band. The log is what teaches the calendar over time.
Should every property build its own demand calendar?
Every property that prices nightly should have one. The shape changes by property type — CBD weekday compression, resort weekend bunching, conference property event proximity — but the four-band structure travels.
A note on what this is for
That discipline is what we built RevPerfect for: a demand calendar that ingests the five inputs automatically, classifies every future date into a band at the 14-day-out checkpoint, freezes the snapshot for monthly audit, and surfaces the override log alongside the band trajectory. It pairs with the metric described in forecast accuracy vs demand calendar accuracy. Try RevPerfect free → or book a 20-minute walkthrough.