Method · Piloting

Pick your indicators before you pick your tools

An indicator with no decision attached to it isn't data. It's noise. What follows keeps to the few structural indicators an independent restaurateur can actually use to pilot without getting lost.
The setup

The method isn't a yearly plan, and it isn't the Monday-morning huddle either. It's the step back, taken each quarter, that decides which numbers are really worth stopping for, and which are just there to reassure.

Symptoms

You might recognise these signs.

  • You check the dashboard every week, but no line ever triggers a clear decision.
  • You track more than twenty indicators and, at every dip in covers, you rediscover your own software trying to figure out what happened.
  • Your team can't tell you which numbers really matter this month — everyone has a hunch, no two are the same.
  • You change tools every eighteen months, hoping a new dashboard will surface what the last ones didn't.
  • Your structural calls — menu, hours, team — are still made by gut, despite all the numbers you collect.
Method

Step by step.

  1. Pick indicators that trigger a decision.

    An indicator is only worth something if it changes what you do. For every number you track, ask: if this line moves 10%, what do I decide? If the answer is 'nothing' or 'I note it', that's not an indicator, it's decoration. Good indicators are few — typically five to seven — and each is attached to a precise lever: the menu, the team, visibility, margin, loyalty.

    Run it backwards. List the three or four structural calls you make every quarter. Trace back to the indicators that should inform them. Anything that doesn't belong on that path goes.

  2. Rank them: one compass indicator, several lighting ones.

    A single indicator can't say everything, but they're not all at the same level. One compass — often paid covers or gross margin — acts as the overall signal. The others are there to explain the compass when it moves: average ticket, menu mix, no-shows, lunch/dinner ratio. The ranking prevents a compass drop from sending you into panic before you've read the why — which is already what the diagnostic Read the room frames.

  3. Match the observation frequency to the decision frequency.

    An indicator that decides a menu change gets read by the quarter, not every Monday. An indicator that pilots the team gets read by the week. A no-show or cover indicator gets watched daily to react, but weekly to decide. The rule: observation frequency follows the frequency of the fastest decision it can trigger — not the frequency the data is available. The Make them return method rests on that discipline: track retention over three months, not weekly traffic.

  4. Share indicators with the people who live them.

    A number only the owner looks at stays the owner's number. An indicator shared with the chef, the host or the floor manager becomes a collective reference point. Not every indicator — gross margin isn't for everyone — but those that inform an operational call (covers, no-shows, average ticket) need to live in the team. Same logic as the diagnostic Track the team: an isolated indicator doesn't act, a shared one becomes a lever.

  5. Review the dashboard every quarter.

    Your indicators aren't carved in stone. Seasonality shifts, the menu evolves, part of the team comes or goes — what mattered in January isn't what matters in September. Once a quarter, ask the same question as on day one: if these lines move, what do I decide? Indicators that no longer trigger anything come out. Ones that have become structural come in. The dashboard stays alive — not an inheritance.
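The five steps above can be sketched as a tiny data structure: each indicator carries the decision it triggers and the frequency of that decision, and the quarterly review simply drops any line without a decision attached. All names, roles and example indicators below are illustrative assumptions, not part of the method itself.

```python
# Minimal sketch of the method's dashboard: every indicator is paired with
# the decision it triggers (step 1) and the frequency of that decision (step 3).
# Indicator names and decisions here are illustrative only.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    role: str        # "compass" or "lighting" (step 2)
    decision: str    # what you do when it moves; empty = decoration
    frequency: str   # "weekly" or "quarterly" -- follows the decision, not the data

dashboard = [
    Indicator("paid covers vs last year", "compass", "adjust communication and prep", "weekly"),
    Indicator("gross margin by category", "lighting", "drop or reposition a dish", "quarterly"),
    Indicator("no-show rate", "lighting", "tighten booking policy", "weekly"),
    Indicator("average satisfaction", "lighting", "", "weekly"),  # no decision attached
]

def quarterly_review(indicators):
    """Step 5: keep only indicators that still trigger a decision."""
    kept = [i for i in indicators if i.decision]
    dropped = [i.name for i in indicators if not i.decision]
    return kept, dropped

kept, dropped = quarterly_review(dashboard)
print([i.name for i in kept])
print("dropped:", dropped)  # 'average satisfaction' never triggered anything
```

The point of the sketch is the shape, not the tool: the same four columns (name, role, decision, frequency) work just as well in a shared spreadsheet.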

Do / Don't

Do

  • Cap piloting at five to seven structural indicators — one compass, the rest as lighting.
  • Attach to every indicator the decision it's meant to trigger, and the frequency you make that call.
  • Review the list quarterly, drop what no longer serves, bring in what has become central.

Don't

  • Pick software first and then try to fill its boxes — the tool ends up dictating the indicators.
  • Stack numbers because they're available, with no decision attached.
  • Confuse a full dashboard with piloting that holds — a crowded dashboard is a promise, not a decision.
A concrete case

Situation

Two independent bistros, same neighbourhood, same range. The first tracks about thirty indicators in a modern POS — day's revenue, average ticket by lunch/dinner/weekend, breakdown by category, return rate, satisfaction, productivity per station. The second tracks five, written in a shared sheet with the chef.

Action

The first spends Monday morning reading numbers; nothing is ever really decided, because every line tells a slightly different story. The second runs a one-hour quarterly review on five indicators: paid covers vs. last year, average ticket lunch/dinner, gross margin, no-show rate, loyalty (customers seen in the last 90 days). Each indicator has a decision attached — drop a dish, shift a service, reactivate a list.
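The loyalty line in that five-indicator sheet needs nothing more than a visit log. A minimal sketch, assuming a simple (name, date) log format that the guide itself does not prescribe:

```python
# Illustrative computation of the loyalty indicator from the case:
# customers seen in the last 90 days, from a plain visit log.
# The log format and names are assumptions for the sketch.

from datetime import date, timedelta

visits = [
    ("martin", date(2024, 6, 3)),
    ("lea",    date(2024, 3, 1)),   # last seen outside the window
    ("martin", date(2024, 5, 20)),
    ("yusuf",  date(2024, 6, 10)),
]

def seen_last_90_days(visit_log, today):
    """Return the set of customers with at least one visit in the window."""
    cutoff = today - timedelta(days=90)
    return {name for name, when in visit_log if when >= cutoff}

regulars = seen_last_90_days(visits, today=date(2024, 6, 15))
print(sorted(regulars))  # ['martin', 'yusuf'] -- lea is slipping, recall list
```

Anyone absent from the set but present earlier in the log is a candidate for the recall loop described in the outcome.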

Outcome

After a year, the first switches tools for the second time. The second has pulled two low-margin dishes, moved a service from Monday lunch to Thursday night, and launched a recall loop for regulars who were slipping. Not because they had more numbers — because they had fewer, but each one triggered a decision.

Common pitfalls

Where it usually goes wrong.

  • Believing a dashboard is a decision.

    A loaded dashboard is reassuring. It gives the feel of piloting — when watching isn't deciding. A dashboard with no recurring meeting where you actually choose what to do becomes an art object. The method starts when every indicator is read inside a frame — quarterly for structural ones, weekly for operational — that forces a decision, even small, or an explicit non-decision.

  • Thinking the more you measure, the better you pilot.

    It's the opposite. Past a dozen indicators tracked seriously, attention dilutes, correlations become noise, and you end up reading nothing. Seven is already a lot for an independent. The right reflex is to drop one when you add one — not to stack. What's true of dishes on a menu is just as true of lines on a dashboard.

  • Letting the tool dictate the indicators.

    Most POS or booking platforms ship with a default dashboard. Useful to start with, dangerous as a piloting frame. The tool pushes what it knows how to compute, not what you need to decide. The right order: set your five to seven indicators, then look for the tool that serves them. The reverse — accepting the default board — produces generic piloting on a job that isn''t generic.

Takeaway

Your checklist.

  • Can I name my five to seven structural indicators without looking at a screen?
  • For each one, do I know which decision it''s meant to trigger, and at what frequency?
  • Is there a clear compass indicator, distinct from the lighting ones?
  • Are my operational indicators shared with the team that lives them day to day?
  • Do I review the list every quarter, or have I been dragging the same lines for two years?
  • Are there numbers I track 'because they show up' that have never triggered a decision?
What's next?

Method in hand. Time to put it to work.

A method is set — still, you need time to put it to work. ReadyToPost frees that time by taking one front off your plate: your presence on the five social networks. Everything written, illustrated, scheduled — calibrated on your restaurant, week after week. So your energy stays on the trade.

Start with ReadyToPost

See how these principles play out day to day. Practice for restaurants gives you concrete, illustrated, adaptable levers — directly applicable the following week. No quarterly plans, no annual roadmaps: weekly gestures that touch something right away.

See it in practice

Other guides for restaurants

Working Google reviews: the loop that lifts

Google reviews don't just arrive — they're worked. Five concrete moves to place this week that turn a passive listing into a system of asking, answering and adjusting.

Running a short campaign without breaking margin

A tasting, a partnership with the wine shop next door, a one-off dish on a Thursday night: a short campaign can restart momentum — or devalue the rest of the menu and chip away at the margin without leaving anything behind. Five concrete moves to design it, frame it financially, and track it from Monday to Sunday.

Win back a regular: 5 concrete moves

A regular who used to come every week and now shows up every two months won't be won back by a marketing email or a discount. Five moves to place this week — named, written, measurable — to crack the door open without forcing it.

Rescue a slow service: 5 concrete moves

A specific service that's dragging — Tuesday night, Sunday lunch — doesn't need a full overhaul. Five moves placed this week are enough to shift the line the following week, without touching the menu or the prices.

Further reading

Related blog articles

  • What the platform docs say

    Five platforms publish changelogs that document what each algorithm rewards. Almost nobody reads them. Here's what two years of release notes reveal.

  • What Asphalte never posts

    Asphalte invites its audience to co-create the next collection — in public, on the same feed where it posts launches. The mechanism is documented and transposable. Here is how.

  • The designer whose work deserved an audience

    Better work, fewer clients. Here is the case of an interior designer who solved the wrong problem first — and what she did differently the second time.

  • The social media terms that matter

    The jargon circulates. Here is what it means when you are the only person running your brand online.

Questions

Frequently asked.

  • How many indicators should you track to pilot a restaurant?

    Five to seven structural indicators is plenty for an independent. One compass — often paid covers or gross margin — and four to six lighting indicators attached to precise levers: average ticket, no-shows, lunch/dinner ratio, loyalty, menu mix. Beyond that, attention dilutes and piloting goes back to passive monitoring. The useful rule: to add one, drop one. That's what forces ranking instead of stacking.

  • How often should you review your dashboard?

    Two frequencies to keep apart. Operational indicators — covers, no-shows, average ticket — get read weekly, because the decisions they trigger are weekly. Structural ones — margin, loyalty, menu mix — get read quarterly, because you don't adjust a menu or a pricing policy every week. The method also runs a quarterly review of the list itself: which indicators triggered decisions, which triggered none, which need replacing.

  • Does an independent restaurant need a piloting tool?

    Not as a starting point. Plenty of independents pilot just fine with a shared spreadsheet updated at close. Software becomes useful when the indicators are stable, the team is used to the recurring review, and manual consolidation takes longer than the decision. The classic mistake is buying the tool first and filling its boxes second: you end up with generic piloting, calibrated on what the tool knows how to show, not on what the house needs to decide.

  • How do you know you're still piloting in the weeds despite all your numbers?

    Three reliable signs. One: you check your numbers regularly but none ever triggers a clear decision. Two: at every cover dip, you rediscover your dashboard trying to understand — proof it wasn't being read in continuity. Three: your structural calls — menu, hours, team — are still made by gut. If two of those three show up, the piloting is nominal, not real. The issue is rarely a lack of numbers — it's the absence of a frame to turn them into decisions.

  • Which indicators are actually actionable in a restaurant?

    The ones where a 10% swing triggers an identifiable decision. Typically: paid covers compared with the same week last year (drives communication and prep), average ticket separated lunch/dinner (drives table suggestion and the menu), no-show rate (drives booking policy), gross margin by category (drives whether to drop or reposition dishes), loyalty — customers seen in the last 90 days — (drives recalls and welcome rituals). The rest — average satisfaction, productivity per station, return rate — is informative, rarely decision-driving. The nuance changes the very nature of the dashboard.