RICS AI Standards for Surveyors: Everything You Need to Know

Whether you’re ready or not, artificial intelligence is quickly becoming part of the surveying and property industry. From valuation models to automated defect detection, AI is changing how surveyors deliver services, communicate with clients, and handle data. But with opportunity comes responsibility. That’s why the Royal Institution of Chartered Surveyors (RICS) has stepped in with the first global professional standard for the responsible use of AI in surveying practice.
Set to take effect on 9 March 2026, the new RICS AI standards introduce mandatory requirements for members and regulated firms around the world. They’re designed to give clarity and confidence in an area that’s often clouded by hype and half-truths.
So, what do these standards actually mean for surveyors working across commercial property, infrastructure, land, and construction? Let’s break it down in simple terms.
Contents: What We’ll Cover
- What are the new RICS AI standards?
- Why AI is becoming unavoidable in surveying
- Key requirements surveyors need to know
- Governance, transparency, and professional oversight explained
- Ethical development of AI and what it means in practice
- Common questions from surveyors about AI standards
- How GoReport supports surveyors with digital data capture and compliance
What Are the New RICS AI Standards?
The new global standard is the first of its kind. It lays out both mandatory requirements and recommended ways of working for surveyors who use, or plan to use, artificial intelligence. The rules apply across valuation, construction monitoring, infrastructure projects, and land services.
RICS Acting President Elect, Maureen Ehrenberg, summed it up well: “Artificial intelligence offers real promise to the surveying profession, but only if used responsibly and ethically. This standard ensures surveyors remain at the forefront of innovation while protecting clients, data, and public trust.”
In plain terms, the standard is about keeping surveyors firmly in the driver’s seat. AI can be a co-pilot, but it’s not allowed to fly solo.
Why Is AI Becoming Unavoidable in Surveying?
It’s easy to think of AI as a Silicon Valley buzzword, but look closer and it’s already threaded into everyday practice:
- Automated valuation models (AVMs) are becoming commonplace.
- Drone imagery is being analysed by machine learning models.
- Natural language processing can draft parts of reports.
- Predictive analytics is informing risk assessments.
Ignore AI and you risk falling behind; adopt it without rules and you risk errors, compliance breaches, or reputational damage. That’s where the RICS AI standards come in. They give surveyors a framework to benefit from new tools while avoiding the pitfalls.
For those who want to see where AI is already being used effectively, we’ve written more on AI in surveying.
Key Requirements Surveyors Need to Know
The standard focuses on four main areas. Each comes with expectations that surveyors, firms, and clients should understand.
Governance and Risk Management
- Businesses must create clear policies for how AI is chosen and used.
- Every AI tool should be logged in a risk register.
- Due diligence checks are required before any tool is implemented.
Why it matters: governance prevents “black box” tools from slipping into workflows without proper checks.
Professional Judgment and Oversight
- Surveyors remain accountable for all work, even if AI generated the first draft or initial result.
- Professional scepticism should be applied to every AI output.
Why it matters: technology can help, but it’s no substitute for a trained surveyor’s expertise.
Transparency and Client Communication
- Clients must be told, in writing, when AI is used.
- Clients should be offered the option to opt out or request clarification.
Why it matters: informed clients are more confident clients.
Ethical Development of AI
- If a business builds its own AI tools, it must assess:
  - Data quality
  - Stakeholder involvement
  - Environmental impact
  - Legal compliance
Why it matters: developing any form of AI must be done with accountability, sustainability, and public trust in mind.
In short, the standards aren’t there to slow surveyors down but to keep the profession trusted, respected, and future-ready.
Governance and Risk Management Explained
Think of governance as your seatbelt – you hope you’ll never need it, but you’d be mad not to wear it. The new RICS AI standards make it clear: you can’t just download an app, plug it in, and hope for the best. Every AI tool has to be assessed, documented, and monitored.
That’s where a risk register comes in. It’s not just another spreadsheet to collect dust, but rather evidence that you’ve done your homework. A well-kept register should show:
- What AI tools you’re using (and why).
- The data sources behind them.
- Known risks and how you’re managing them.
- Who is responsible for ongoing oversight.
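To make that concrete, a single register entry might look something like the sketch below. The tool name and details are entirely hypothetical – they’re here only to show the level of detail worth recording:

- Tool: AVM plug-in used for initial residential valuations (hypothetical example)
- Data sources: Land Registry sold-price data; the firm’s internal comparables
- Known risks: model drift over time; weaker coverage in rural postcodes
- Mitigation: quarterly sampling of outputs against surveyor-led valuations
- Oversight: a named director, reviewed at the monthly compliance meeting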
This matters because AI systems aren’t static; they learn, update, and sometimes behave unpredictably. A model that worked perfectly last year might drift over time if the training data changes. Without governance, you may not even notice until a mistake lands on your desk – or worse, in front of a client.
Business leaders also need policies and training so staff don’t introduce untested tools without approval. With so many surveying tools and apps flooding the market, governance acts as the filter between innovation and chaos. It helps you embrace technology without sacrificing compliance or client trust.
Professional Judgment and Oversight
Here’s the simple truth: AI is clever, but it’s not accountable. That’s your job. If an AI tool suggests a property is worth £1.5 million, it’s still your signature at the bottom of the valuation report and your professionalism standing on the line.
The standard reinforces the need for what many call a “human-in-the-loop” model. That means AI can assist, automate, or highlight issues, but the final decision must rest with a qualified surveyor. In practice, this looks like:
- Reviewing AI outputs for consistency with your own knowledge.
- Questioning anomalies rather than accepting them at face value.
- Documenting where and how you’ve applied professional scepticism.
This doesn’t mean surveyors should avoid AI; it just means they should use it as a second set of eyes. Just as you’d cross-check a junior colleague’s work, AI outputs deserve the same professional scrutiny.
The upside? AI can reduce admin, flag risks, and speed up tasks. The downside? Over-reliance could lead to errors, negligence claims, or disciplinary action. The balance is using AI as a tool, not a crutch.
Transparency and Client Communication
This is the area most likely to change client relationships. Clients must be told formally if AI tools are used. That could be as simple as a note in the engagement letter or a standard line in reports.
Some surveyors may worry this will spook clients, but the opposite may be true. Clear communication builds trust. In fact, framing AI as an additional tool (not a replacement for expertise) often reassures clients.
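What might that disclosure look like in practice? Wording along the following lines could work – note this is illustrative only, not RICS-approved text, and any disclosure should be checked against the standard and your firm’s own policies:

“Parts of this report were prepared with the assistance of AI tools. All findings and conclusions have been reviewed by a qualified surveyor, who remains fully accountable for the contents of this report. Please contact us if you would like further detail on how these tools were used.”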
Ethical Development of AI
Not every business will want to build AI systems in-house, but those that do face stricter requirements. For example:
- Data must be checked for bias.
- Stakeholders (including clients and even the public) should be consulted.
- Environmental impacts must be considered.
- All developments must comply with existing laws.
This section is about future-proofing. With regulators, the press, and the public watching, we can’t afford shortcuts.
The Bigger Picture: AI and Compliance
These standards don’t exist in isolation; they link directly to broader regulatory expectations, from GDPR to professional indemnity insurance. Surveyors already face significant compliance demands, and AI adds another layer.
We’ve previously covered this in Where AI Meets Compliance, showing how technology and professional standards are increasingly intertwined.
For firms juggling multiple demands – resilience, regulation, and results – the new standard ties neatly into the Three R’s of Surveying.
The timing isn’t accidental. By 2026, AI tools will be more widespread, more powerful, and harder to ignore. Surveyors who understand the standards now will have:
- A competitive edge with clients who want reassurance.
- Better protection against professional risk.
- Stronger compliance records for regulators and insurers.
In short, they’ll be more resilient, more trusted, and better placed to grow.
How GoReport Supports Surveyors
At GoReport, we’ve always believed digital data capture should make life easier, not harder. Our platform already helps surveyors cut admin, reduce risk, and deliver reports clients can trust.
With the new RICS AI standards, tools like GoReport become even more valuable. Why? Because they provide:
- Clear audit trails of data sources.
- Transparency in reporting workflows.
- Flexibility to integrate with compliant technologies.
Surveyors who prepare now won’t just tick the compliance box; they’ll be the ones leading conversations with clients, regulators, and peers about how AI can be used responsibly.
If you’re wondering how to prepare your business for these standards, a good starting point is seeing how GoReport can fit into your workflow. You can book a free demo here.
FAQs: RICS AI Standards for Surveyors
When do the new RICS AI standards come into effect?
9 March 2026. You should start preparing policies and procedures well before then.
Do these standards apply only to valuations?
No. They apply across valuation, construction, land, and infrastructure services.
What if my surveying business doesn’t currently use AI?
You’re not exempt. Even if you don’t use AI today, the standards require awareness and readiness since many tools you use in the future may include AI.
Do I need to tell clients every time AI is used?
Yes, clients must be informed in writing when AI supports service delivery.
Will AI replace surveyors?
No. AI can support tasks, but professional judgment, accountability, and expertise remain firmly in human hands.
What happens if firms ignore the standards?
As with other RICS standards, non-compliance can affect professional standing, lead to complaints, or even disciplinary action.