My eye was caught yesterday morning by news that PropTech company Agent Software is claiming that ‘some agencies deliberately over-value properties in a bid to win instructions’.
This is, it claims, a ‘response to challenging market conditions, reduced transaction volumes and increased competition’.
This news really didn’t surprise me. It’s something we’ve known for a long time. Overvaluing has been practised in many dark corners throughout all my years in the industry, most notably in the sales sector, where the big motivation is to win the instruction.
From there, relationships can be nurtured to, hopefully, lead to a sale. But first and foremost, you need that instruction!
At the risk of upsetting a few people, I would even argue that it is human nature to overvalue. In one sense it’s innate, found deep down inside us: that desire to be optimistic, confident in our abilities as agents.
On a more self-interested level, it’s easy to understand why some agents will overvalue in order to secure the instruction, especially in the most competitive markets.
This understanding, however, does not detract from the fact that the practice itself is morally ambiguous. The agent’s job is to serve their client’s best interests, not to sway the client into giving them the instruction based on falsities.
I appreciate that there is no single assigned value to a property; who’s to say what a house could sell for? So the agent is not doing anything wrong, per se, but they’re not giving the seller a realistic picture of the market.
This raises the question: if overvaluing is a common, long-standing trait of the industry, do we need to find a different way of valuing altogether?
Moreover, do we even need humans involved in the valuation process? One solution might be auto-valuation...
Auto-valuation and the missing 3%
This article, also published yesterday, tells of Urbanzoom, a PropTech startup from Singapore, and its auto-valuation tool, which it claims achieves ‘a median error of less than 3%’.
That’s a pretty impressive claim and one which, if true, proves there is real potential in auto-valuation innovation. But I still have a few hang-ups about it...
Firstly, the race to build and offer automated valuation models (AVMs) can only have one winner: the company which creates the most accurate algorithm and delivers the most accurate valuations. Nobody is going to want to go with the company that offers the second most accurate valuations, so there will be no competition.
And then there’s the fact that the route to becoming the industry-dominant AVM is incredibly risky. Zillow, for example, has enjoyed great success with its evolving auto-valuation model, producing some very pleasing results. It has, however, also been sued on multiple occasions for assigning incorrect values.
That’s why it’s important to ask, when Urbanzoom claims to be accurate to within 3%: 3% of what, exactly? There is no single truth when it comes to a property price, so what figure is it claiming to be within 3% of?
I can only assume it’s comparing its output to the value a human assigned to the same house.
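For what it’s worth, a ‘median error’ figure is usually a median absolute percentage error against some set of reference prices, whatever those happen to be. Here is a minimal sketch of that calculation; the estimates and reference prices below are invented purely for illustration, and Urbanzoom’s actual methodology is not public, so this is an assumption about the shape of the claim, nothing more:

```python
# Hypothetical sketch: how a "median error" claim is typically computed.
# All figures below are invented for illustration; the reference prices
# could be achieved sale prices or human valuations -- the claim itself
# doesn't tell us which.
from statistics import median

def median_error_pct(estimates, reference_prices):
    """Median absolute percentage error of estimates vs. reference prices."""
    return median(
        abs(est - ref) / ref * 100
        for est, ref in zip(estimates, reference_prices)
    )

model_estimates = [510_000, 295_000, 402_000, 750_000]   # invented (GBP)
reference_prices = [500_000, 300_000, 420_000, 760_000]  # invented (GBP)

print(round(median_error_pct(model_estimates, reference_prices), 2))  # → 1.83
```

The point is that the headline number is only as meaningful as the reference column: swap in a different benchmark and the same model produces a different ‘median error’.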
Bad data = no data
Even if Urbanzoom is as accurate as it claims, one still has to wonder what the breaking point of that accuracy is, and how much responsibility auto-valuation can actually handle.
Take, for example, a series of London streets made up of terrace-upon-terrace of identical Victorian conversions. I can believe that algorithms would have an easy time valuing those houses.
Back in the day, I knew the value of some houses before even visiting them. I knew the street, I knew the layout, I knew it was a two-bed. This is all stuff that auto-valuation can easily learn. But what happens when you’re dealing with more unusual properties? Church conversions, farms, homes with unique architecture, etc. How can an algorithm possibly know the value of a truly one-off property?
It all depends on the data. Bad data in means bad data out, and good data in means good data out. It’s that simple. For a valuation, bad data is basically no data. Without data, algorithms are useless. And the data has to keep flowing in indefinitely, tracking market trends and constantly readjusting to suit.
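To make that concrete, here is a deliberately simplistic sketch of a comparables-based estimate, the crudest possible AVM. Every name and number in it is invented for illustration; real AVMs are far more sophisticated, but they share the same dependency: give it no comparable sales, as with a one-off property, and it has nothing to say at all:

```python
# Hypothetical sketch: the crudest comparables-based valuation.
# All data and names are invented for illustration.

def estimate_from_comparables(sold_prices_per_sqm, floor_area_sqm):
    """Price a property from recent sales of similar homes nearby.

    Returns None when there are no comparables: without data,
    the algorithm has no basis for a valuation at all.
    """
    if not sold_prices_per_sqm:
        return None  # bad data is basically no data
    avg_price_per_sqm = sum(sold_prices_per_sqm) / len(sold_prices_per_sqm)
    return avg_price_per_sqm * floor_area_sqm

# A street of near-identical Victorian terraces: plenty of comparables.
print(estimate_from_comparables([7_000, 7_200, 6_900], 85))  # a sensible figure

# A one-off church conversion: no comparable sales exist.
print(estimate_from_comparables([], 210))  # → None
```

Everything an AVM knows lives in that first argument; the unusual properties are precisely the ones for which it is empty or misleading.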
Generating this data and putting it to use is a long, long process. Along the way, auto-valuation is going to get it wrong far too often.
The lesser of two evils?
We are left with one final question: which is the lesser evil? Relying on humans who are naturally inclined to overvalue, or relying on AVMs which inevitably get it wrong sometimes?
Neither is a good option, so something’s got to change. For humans, that requires us to collectively decide to assign more realistic values to homes, regardless of how that affects our chance of winning instructions.
For auto-valuation, it means being patient and harvesting more and more data, increasingly comprehensive data, until the algorithm has enough to be truly reliable.
So, I guess that means there’s actually a more vital final question: the human or the algorithm, which is most likely to change?
*James Dearsley is a leading PropTech influencer and commentator. You can follow him on Twitter.