In August, the Appraisal Institute held its annual meeting in San Diego. This was a great 3-day educational event, with top speakers on cutting-edge valuation topics.
I’m pleased to have been invited to present a tutorial on advanced valuation methods, which complemented a week-long Institute course designed by Dr. Marv Wolverton, as well as his text, An Introduction to Statistics for Appraisers, also published by the Institute.
These advanced valuation methods really aren’t all that advanced—most of what I presented is well established in the academic literature, and much of it has been used for years by tax assessors and automated valuation models (AVMs). Unfortunately, most practicing appraisers lack a working familiarity with such methods, and the Institute is taking the lead in changing that.
My talk concentrated on four methods that all appraisers should recognize:
- Hedonic modeling
- Survey research (and particularly contingent valuation)
- Meta-analysis
- Expert systems
Even if an appraiser never uses one of these methods, he or she will see them in settings such as the courtroom, in eminent domain matters such as complex right-of-way takings, during property tax negotiations, or while valuing property for financial reporting. So, allow me to provide a brief overview.
Hedonic Modeling
An application of multiple regression modeling, a “hedonic” model determines the contributory value (the “enjoyment”) of individual components of the property. For example, a larger home sells for more than a smaller one, and a larger tract of land sells for more than a smaller one; thus, site size and structure size are two components of value. However, does paint color change the value? How about the number of bathrooms? The former probably does not, but the latter does, albeit typically in a non-linear fashion. The hedonic model not only tells us the marginal value of each component (that is, how much each component contributes to the total value), but also which components are statistically valid contributors to price.
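To make this concrete, here is a minimal sketch of fitting a hedonic model by ordinary least squares. All of the sales data and dollar figures below are synthetic, invented purely for illustration:

```python
# Toy hedonic model: price as a linear function of property components.
# All sales data here is synthetic; a real model would use actual sales.
import numpy as np

# Synthetic sales: columns are [square feet, bathrooms], with prices
# generated so each sq. ft. adds $150 and each bathroom adds $8,000.
X = np.array([
    [1200, 1],
    [1500, 2],
    [1800, 2],
    [2100, 3],
    [2400, 3],
], dtype=float)
prices = 50_000 + 150 * X[:, 0] + 8_000 * X[:, 1]

# Add an intercept column and solve the least-squares problem.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, prices, rcond=None)
intercept, per_sqft, per_bath = coef

print(f"marginal value per sq. ft.: ${per_sqft:,.0f}")
print(f"marginal value per bathroom: ${per_bath:,.0f}")
```

Because the synthetic prices are generated exactly from the model, the recovered coefficients match the assumed contributory values; real sales data would of course be noisy, and each coefficient's statistical significance would need to be tested.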
Hedonic modeling has a number of potential uses in valuation. First, if we have a large data set of sales, we can use that to model the values of all of the other properties in the area that have not sold. This makes hedonic modeling a great tool for tax assessment, eminent domain takings, mortgage loan review, or other large-scale projects. Second, if the market is working well and is at equilibrium, then the contributory value of a certain neighborhood-wide component can be extracted. For example, what if a particular neighborhood has a great school? How much does the school affect the values of these homes compared to homes in a similar but non-affected neighborhood?
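The neighborhood-wide extraction described above is commonly done with an indicator (dummy) variable in the regression. A minimal sketch, again with entirely hypothetical numbers:

```python
# Extracting a neighborhood-wide premium with an indicator variable.
# Synthetic data: otherwise-identical homes, but those flagged as being
# in the good-school neighborhood carry a $25,000 premium by design.
import numpy as np

sqft = np.array([1400, 1600, 1800, 1400, 1600, 1800], dtype=float)
good_school = np.array([0, 0, 0, 1, 1, 1], dtype=float)
prices = 60_000 + 140 * sqft + 25_000 * good_school

# Regress price on size plus the school indicator; the indicator's
# coefficient is the extracted neighborhood-wide premium.
A = np.column_stack([np.ones(len(sqft)), sqft, good_school])
coef, *_ = np.linalg.lstsq(A, prices, rcond=None)
school_premium = coef[2]

print(f"estimated school premium: ${school_premium:,.0f}")
```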
Of course, the market participants have to understand that the school is better, and they have to factor that into their purchase-and-sale decisions. How can the appraiser test to see if that is true? Survey research provides an excellent and well-developed tool for appraisers.
Survey Research – Contingent Valuation
All appraisers use some form of survey research in their work, even if it’s simply surveying local real estate agents for rental rates, local bankers for lending rates, or local tenants for occupancy projections. More formal surveys have a statistical component, and the most common of these is contingent valuation, or “CV” for short.
In the CV survey, the appraiser is curious about the impact of a “non-market good,” such as a park or a preservation ordinance, or even a “bad,” such as a proposed dumpsite or groundwater contamination. Market participants are frequently unaware of these issues, and as such, pricing models, such as hedonic models, cannot correctly measure the impact of the issue.
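As a toy illustration (not a substitute for a properly designed CV instrument), stated willingness-to-pay responses might be tabulated as follows; the dollar figures are invented:

```python
# Toy contingent-valuation tabulation: mean stated willingness to pay
# (WTP) for a proposed park, with the standard error of the mean.
# The responses below are hypothetical survey answers in dollars.
import statistics

wtp = [0, 500, 1000, 1000, 1500, 2000, 2500, 3000, 3000, 5000]

mean_wtp = statistics.mean(wtp)
se = statistics.stdev(wtp) / len(wtp) ** 0.5  # standard error of the mean

print(f"mean WTP: ${mean_wtp:,.0f} (standard error ${se:,.0f})")
```

A real CV study would also need careful question design and sampling, which is exactly why the survey-methodology literature discussed below matters.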
Survey methodology is well established in the market research literature and well accepted by the courts. Indeed, the Federal Judicial Center, the research and education agency of the federal judiciary, devotes an entire chapter of its Reference Manual on Scientific Evidence to the admissibility of survey research.
Meta-Analysis
How do we statistically compile the collected wisdom of published authors on a particular subject? Medical researchers, for example, have long used the concept of a meta-analysis as a way of treating individual published research papers as data points in a larger data analysis. Let’s say there are three studies out there. One says that a particular phenomenon (in our case, let’s say the impact of a park on property values) is 15%. Another study says 20%, and a third says 25%. A simple average of these three is 20%, so an appraiser with the task of determining the value impact of a proposed park would be on safe ground to say 20% (plus or minus 5%).
What if there are 100 studies on this topic? Some are large and some are small; some are in different parts of the country; some were done several years ago, while others are more recent. A meta-analysis allows the researcher (in this case, the appraiser) to compile all of these studies into a large-scale multiple regression model that takes such factors into account. For example, a park in an inner city may be more valuable to surrounding houses than a park in a rural area. A park right across the street may actually be a detriment (due to lack of privacy and other issues), while a park two blocks away may be a benefit. Thus, studying a broad array of research allows the appraiser to answer these complex questions with confidence.
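A minimal sketch of the idea, weighting each study by its sample size rather than taking a simple average. The three studies and their sample sizes are hypothetical:

```python
# Toy meta-analysis: combine published estimates of a park's impact on
# property values, weighting each study by its (hypothetical) sample size.
studies = [
    {"impact_pct": 15.0, "n_sales": 400},
    {"impact_pct": 20.0, "n_sales": 150},
    {"impact_pct": 25.0, "n_sales": 50},
]

total_n = sum(s["n_sales"] for s in studies)
weighted = sum(s["impact_pct"] * s["n_sales"] for s in studies) / total_n
simple = sum(s["impact_pct"] for s in studies) / len(studies)

print(f"simple average:   {simple:.1f}%")    # treats all studies equally
print(f"weighted average: {weighted:.1f}%")  # larger studies count more
```

A full meta-analytic regression would go further and model study characteristics (region, vintage, distance to the park) as explanatory variables, as described above.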
Expert Systems
Hedonic modeling requires large data sets and “well-behaved” pricing data; it also leaves little room for appraiser judgment and is of little use when the data set is small or not well-behaved. Expert systems combine some of the best of both worlds. These models are not perfect, but they allow the appraiser to improve on the simple sales adjustment grid and gain a modest degree of statistical power while still exercising appraiser judgment. Expert systems are an outgrowth of research on neural networks and data mining; they enable the appraiser to appraise a small set of properties holistically and simultaneously, in a way that is internally consistent.
Expert systems make use of three important sets of statistical processes:
- Maximum likelihood estimation (MLE) – Most analytical systems, such as regression, start with an assumption about the underlying statistical process (e.g., normally distributed) and then fit a model to that assumed distribution. MLE works in the other direction: it starts with the data and asks, “what statistical process best fits the data we have?”
- Bayesian estimation – Real estate appraisal doesn’t operate in a vacuum. Indeed, the appraiser comes to the assignment with a pretty good idea of what the range of value should be and then homes in on the truth from that starting point. Bayesian estimation is the formal term for using that prior knowledge to guide the forecasting process.
- Non-parametric methods – Most students of statistics are used to measures like “mean” and “standard deviation,” which are only useful when the underlying data is well-behaved. When it is not, non-parametric measures such as the “median” and the “coefficient of dispersion” are used instead. Fortunately, a robust set of such statistical measures is available to guide this process.
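Two of these ideas can be sketched in a few lines; all figures are hypothetical. The Bayesian piece uses the textbook normal-prior/normal-likelihood update, and the coefficient of dispersion (COD) follows the usual assessment-ratio definition (average absolute deviation from the median ratio, as a percent of the median):

```python
import statistics

# 1. Bayesian estimation: combine a prior opinion of value with sale
#    evidence. With a normal prior and normal likelihood, the posterior
#    mean is a precision-weighted average of the two.
prior_mean, prior_var = 300_000, 20_000 ** 2   # appraiser's starting opinion
data_mean, data_var = 330_000, 10_000 ** 2     # evidence from comparable sales

w_prior = 1 / prior_var
w_data = 1 / data_var
posterior_mean = (w_prior * prior_mean + w_data * data_mean) / (w_prior + w_data)
print(f"posterior value estimate: ${posterior_mean:,.0f}")

# 2. Non-parametric measures: the median assessment ratio and the COD,
#    the average absolute deviation from the median ratio expressed as
#    a percent of the median.
ratios = [0.85, 0.90, 0.95, 1.00, 1.05, 1.20]  # assessment/sale-price ratios
med = statistics.median(ratios)
cod = 100 * sum(abs(r - med) for r in ratios) / (len(ratios) * med)
print(f"median ratio: {med:.3f}, COD: {cod:.1f}")
```

Note how the posterior lands closer to the sales evidence than to the prior, because the sales data carries the smaller variance (higher precision); that is exactly the “honing in from a starting point” described above.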
In the upcoming rewrite of the Uniform Standards of Professional Appraisal Practice (USPAP), a significant change to the Scope of Work rule has been proposed. Presently, an appraiser is expected to use methods similar to those his or her peers in the field would use. This calls into question the definition of “peers,” since many of these statistical methods, while increasingly used, are still more commonly found in the academic or scientific literature. The proposed 2014 rewrite would recognize that appraisers use methods from the academic and peer-reviewed literature, and that these methods are every bit as valid as the more traditional appraisal tools.
– John Kilpatrick