Working your indicators: the loop that decides
Indicators don't arrive with decisions attached; they're worked. Five concrete moves that turn a loaded dashboard into a system of asking, ranking and revising.
An indicator with no decision attached to it isn't data. It's noise.
The method isn't a yearly plan, and it isn't the Monday-morning huddle either. It's the step back, taken each quarter, that decides which numbers are worth stopping for, and which are just there to reassure.
An indicator is only worth something if it changes what you do. For every number you track, ask: if this line moves 10%, what do I decide? If the answer is 'nothing' or 'I note it', that's not an indicator, it's decoration. Good indicators are few — typically five to seven — and each is attached to a precise lever: the menu, the team, visibility, margin, loyalty.
Run it backwards. List the three or four structural calls you make every quarter. Trace back to the indicators that should inform them. Anything that doesn't belong on that path goes.
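A minimal sketch of that test in code, with illustrative indicator names, levers and decisions rather than a prescribed list: each tracked line carries the decision a meaningful move would trigger, and anything that carries none is flagged as decoration.

```python
# A dashboard as a mapping: each indicator names the lever it pulls and the
# decision a meaningful move (say, 10%) would trigger. All entries here are
# hypothetical examples.
indicators = {
    "paid_covers_vs_last_year": {"lever": "visibility", "decision": "shift communication or prep"},
    "avg_ticket_lunch":         {"lever": "menu",       "decision": "rework the table suggestion"},
    "gross_margin_by_category": {"lever": "margin",     "decision": "drop or reposition a dish"},
    "no_show_rate":             {"lever": "loyalty",    "decision": "tighten the booking policy"},
    "avg_satisfaction":         {"lever": None,         "decision": None},  # noted, never acted on
}

# The day-one question run as a filter: if this line moves 10%, what do I decide?
decoration = [name for name, spec in indicators.items() if spec["decision"] is None]
print(decoration)  # -> ['avg_satisfaction']: a candidate to cut, not to keep
```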
A single indicator can't say everything, but they're not all at the same level. One compass — often paid covers or gross margin — acts as the overall signal. The others are there to explain the compass when it moves: average ticket, menu mix, no-shows, lunch/dinner ratio. The ranking prevents a compass drop from sending you into panic before you've read the why — which is already what the diagnostic Read the room frames.
An indicator that decides a menu change gets read by the quarter, not every Monday. An indicator that pilots the team gets read by the week. A no-show or cover indicator gets watched daily to react, but weekly to decide. The rule: observation frequency follows the frequency of the fastest decision it can trigger — not the frequency the data is available. The Make them return method rests on that discipline: track retention over three months, not weekly traffic.
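One way to encode that rule, as a minimal sketch with assumed indicator names: each line stores the cadence of the fastest decision it can trigger, and each meeting pulls only the lines at its cadence.

```python
# Reading cadence follows the fastest decision an indicator can trigger,
# not how often the POS refreshes the number. Assignments are illustrative.
DAILY, WEEKLY, QUARTERLY = "daily", "weekly", "quarterly"

decision_cadence = {
    "no_show_rate":      WEEKLY,     # watched daily to react, read weekly to decide
    "paid_covers":       WEEKLY,
    "menu_mix":          QUARTERLY,  # it decides a menu change, so it is read by the quarter
    "retention_90_days": QUARTERLY,  # tracked over three months, not weekly traffic
}

def agenda(meeting: str) -> list[str]:
    """Indicators that belong in a given meeting: weekly huddle or quarterly review."""
    return [name for name, cadence in decision_cadence.items() if cadence == meeting]

print(agenda(WEEKLY))     # -> ['no_show_rate', 'paid_covers']
print(agenda(QUARTERLY))  # -> ['menu_mix', 'retention_90_days']
```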
A number only the owner looks at stays the owner's number. An indicator shared with the chef, the host or the head of room becomes a collective reference point. Not every indicator — gross margin isn't for everyone — but those that inform an operational call (covers, no-shows, average ticket) need to live in the team. Same logic as the diagnostic Track the team: an isolated indicator doesn't act, a shared one becomes a lever.
Your indicators aren't carved in stone. Seasonality shifts, the menu evolves, part of the team comes or goes — what mattered in January isn't what matters in September. Once a quarter, ask the same question as on day one: if these lines move, what do I decide? Indicators that no longer trigger anything come out. Ones that have become structural come in. The dashboard stays alive — not an inheritance.
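Sketched in the same spirit, with hypothetical names and counts: a quarter-end pass that keeps only the indicators that actually triggered a decision.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    decisions_triggered: int  # decisions it actually drove this quarter

# A hypothetical quarter-end review: the same day-one question, asked again.
tracked = [
    Indicator("paid_covers", 3),
    Indicator("terrace_occupancy", 0),  # mattered in June, triggers nothing in September
    Indicator("no_show_rate", 2),
]

# Indicators that no longer trigger anything come out; the dashboard stays alive.
tracked = [ind for ind in tracked if ind.decisions_triggered > 0]
print([ind.name for ind in tracked])  # -> ['paid_covers', 'no_show_rate']
```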
Situation
Two independent bistros, same neighbourhood, same range. The first tracks about thirty indicators in a modern POS — day's revenue, average ticket by lunch/dinner/weekend, breakdown by category, return rate, satisfaction, productivity per station. The second tracks five, written in a shared sheet with the chef.
Action
The first spends Monday morning reading numbers; nothing is ever really decided, because every line tells a slightly different story. The second runs a one-hour quarterly review on five indicators: paid covers vs. last year, average ticket lunch/dinner, gross margin, no-show rate, loyalty (customers seen in the last 90 days). Each indicator has a decision attached — drop a dish, shift a service, reactivate a list.
Outcome
After a year, the first switches tools for the second time. The second has pulled two low-margin dishes, moved a service from Monday lunch to Thursday night, and launched a recall loop for regulars who were slipping. Not because they had more numbers — because they had fewer, but each one triggered a decision.
A loaded dashboard is reassuring. It gives the feel of piloting — when watching isn't deciding. A dashboard with no recurring meeting where you actually choose what to do becomes an art object. The method starts when every indicator is read inside a frame — quarterly for structural ones, weekly for operational — that forces a decision, even small, or an explicit non-decision.
More numbers don't mean finer piloting; it's the opposite. Past a dozen indicators tracked seriously, attention dilutes, correlations become noise, and you end up reading nothing. Seven is already a lot for an independent. The right reflex is to drop one when you add one — not to stack. What's true of dishes on a menu is just as true of lines on a dashboard.
Most POS or booking platforms ship with a default dashboard. Useful to start with, dangerous as a piloting frame. The tool pushes what it knows how to compute, not what you need to decide. The right order: set your five to seven indicators, then look for the tool that serves them. The reverse — accepting the default board — produces generic piloting on a job that isn't generic.
A method is set — still, you need time to put it to work. Readytopost frees that time by taking one front off your plate: your presence on the five social networks. Everything written, illustrated, scheduled — calibrated on your restaurant, week after week. So your energy stays on the trade.
Start with ReadyToPost
How many indicators should an independent track?
Five to seven structural indicators is plenty for an independent. One compass — often paid covers or gross margin — and four to six supporting indicators attached to precise levers: average ticket, no-shows, lunch/dinner ratio, loyalty, menu mix. Beyond that, attention dilutes and piloting goes back to passive monitoring. The useful rule: to add one, drop one. That's what forces ranking instead of stacking.
At what frequency should they be read?
Two frequencies to keep apart. Operational indicators — covers, no-shows, average ticket — get read weekly, because the decisions they trigger are weekly. Structural ones — margin, loyalty, menu mix — get read quarterly, because you don't adjust a menu or a pricing policy every week. The method also runs a quarterly review of the list itself: which indicators triggered decisions, which triggered none, which need replacing.
Do you need dedicated software?
Not as a starting point. Plenty of independents pilot just fine with a shared spreadsheet updated at close. Software becomes useful when the indicators are stable, the team is used to the recurring review, and manual consolidation takes longer than the decision. The classic mistake is buying the tool first and filling its boxes second: you end up with generic piloting, calibrated on what the tool knows how to show, not on what the house needs to decide.
How do you know the piloting is nominal rather than real?
Three reliable signs. One: you check your numbers regularly but none ever triggers a clear decision. Two: at every cover dip, you rediscover your dashboard trying to understand — proof it wasn't being read in continuity. Three: your structural calls — menu, hours, team — are still made by gut. If two of those three show up, the piloting is nominal, not real. The issue is rarely a lack of numbers — it's the absence of a frame to turn them into decisions.
Which indicators actually deserve tracking?
The ones where a 10% swing triggers an identifiable decision. Typically: paid covers compared with the same week last year (drives communication and prep), average ticket separated lunch/dinner (drives table suggestion and the menu), no-show rate (drives booking policy), gross margin by category (drives whether to drop or reposition dishes), loyalty, customers seen in the last 90 days (drives recalls and welcome rituals). The rest — average satisfaction, productivity per station, return rate — is informative, rarely decision-driving. The nuance changes the very nature of the dashboard.
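To make one of those lines concrete: a minimal sketch of the loyalty indicator, customers seen in the last 90 days, computed from a visit log. The log format and customer ids are assumptions for illustration; any booking-system export with a customer id and a visit date will do.

```python
from datetime import date, timedelta

# Hypothetical visit log: (customer_id, visit_date) pairs from the booking system.
visits = [
    ("c01", date(2024, 9, 2)), ("c01", date(2024, 11, 20)),
    ("c02", date(2024, 5, 14)),                     # last seen more than 90 days ago
    ("c03", date(2024, 11, 28)),
]

today = date(2024, 12, 1)
window = today - timedelta(days=90)

# Loyalty: distinct customers seen in the last 90 days.
recent = {cid for cid, d in visits if d >= window}
slipping = {cid for cid, d in visits} - recent
print(len(recent), sorted(slipping))  # -> 2 ['c02']: 'c02' goes on the recall list
```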