Assessing Assessments: Evidence From the Assessment-to-Sales Ratio

One facet of the debate over the Future Land Use Map is the question of whether Charlottesville is facing an acute housing crisis. Such a crisis might manifest as high housing prices or as a rapid rate of housing price increases. Some participants have pointed to the assessment-to-sales ratio (ASR) of housing as evidence of housing cost pressures. The ASR is the ratio between the sale price of a property and its most recent tax-assessment value; despite the name, it is sometimes expressed with the sale price in the numerator (as we do below) rather than the reciprocal. The idea is that if houses are selling well above assessed value, that is evidence of housing price pressure. One city council member in particular has taken to citing the high sale prices of a few houses in his neighborhood relative to their assessed values as evidence of the necessity of moving ahead with the FLUM.

We have done more in-depth work on the question of the degree of affordability stress in Charlottesville, and we recommend that readers who want a panoramic view of the question refer to that work. However, given the frequent references to ASR in the debate, we wanted to address that narrow issue as well. At the very least, we can offer more systematic data on ASR, so that everyone has the opportunity to examine data more extensive and complete than a convenience sample drawn from the 100 yards surrounding a city council member’s house. To our knowledge, the city has not published this data, so it is understandable that people have up to now had mostly anecdote to go on.

Before we get into the data, it is important to note that there are many arguments against taking ASR too seriously. Assessments are often quite inaccurate, and assessors work with incomplete data. Homeowners fight to lower assessments but fight to maximize sale prices. If a home sale comes in below assessed value, neighbors have a strong incentive to push for their own assessments to be lowered; no such incentive operates in the opposite direction. Assessments are also lagging indicators. In fact, we believe that to the extent assessments are accurate at the time they are calculated, the ASR really just reduces to a measure of year-on-year housing price change, biased upward by a constant factor reflecting the general tendency of assessments to run below market prices. Given that there are many existing high-frequency measures of housing price change (repeat-sales indices from FHFA, our own RSI, hedonic indices like Zillow’s ZHVI), it seems less than optimal to rely on a noisy and potentially biased measure. Below, we include a short appendix that gives a more formal argument for the view that ASR tends to approach a measure of housing price change.

But, as we mentioned above, participants in the debate continue to reference ASR. Therefore, we calculated an ASR index for Charlottesville from 1997 to the present. We used the city’s Open Data Portal to get historic assessments and all residential sales. We cleaned the sales data to remove “zero-price” sales, and then, for each sale, compared the sale price to the most recent tax assessment. We express our ASR with the sale price as the numerator, so an ASR of one means the sale price and assessment were identical, above one means the sale price was above assessment, and below one means the sale price was below assessment. We removed outliers, which we defined as sales with an ASR below 0.7 or above 2.0. We had records for 22,686 sales, and 17,883 after removal of outliers.
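
For readers who want to reproduce the calculation, a minimal Python/pandas sketch of its general shape is below. It is illustrative only: the file and column names (residential_sales.csv, assessments.csv, parcel_id, sale_price, sale_date, assessed_value, assessment_year) are hypothetical placeholders rather than the actual Open Data Portal schema, and the real cleaning steps are more involved.

```python
import pandas as pd

# Hypothetical file and column names; the actual Open Data Portal schema differs.
sales = pd.read_csv("residential_sales.csv", parse_dates=["sale_date"])
assessments = pd.read_csv("assessments.csv")  # one row per parcel per assessment year

# Drop "zero-price" (non-arm's-length) transfers.
sales = sales[sales["sale_price"] > 0].copy()
sales["sale_year"] = sales["sale_date"].dt.year

# Match each sale to the most recent assessment on or before the year of sale.
merged = pd.merge_asof(
    sales.sort_values("sale_year"),
    assessments.sort_values("assessment_year"),
    by="parcel_id",
    left_on="sale_year",
    right_on="assessment_year",
    direction="backward",
)

# ASR with the sale price in the numerator; drop outliers outside [0.7, 2.0].
merged["asr"] = merged["sale_price"] / merged["assessed_value"]
merged = merged[merged["asr"].between(0.7, 2.0)]

# Median ASR by year of sale.
print(merged.groupby("sale_year")["asr"].median())
```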

ASR has increased recently, but off a “local” low that seems to coincide with the price drop at the very beginning of the pandemic. At 1.157, the ASR is higher than the sample average of 1.095, but not dramatically so. It is far below the peak experienced in the housing “bubble” just before the GFC. Our take, in short: a bit higher than normal, but far from extraordinary.

For good measure, we examined the distribution of ASRs in 2021, 2016 (a recent year of below-sample-average ASR), and 2005 (the highest-ASR year in the sample) to get a more granular look at the sales premium over assessed value. Below, you can see the frequency distributions of ASR for each of these years superimposed, with the respective medians represented by vertical lines.

We see some increase in dispersion in 2021 versus 2016, but it is nothing like the dispersion we see in 2005. This pattern makes sense: when prices are stable, relative prices have also settled into a steadier equilibrium; when prices are moving quickly, relative prices are still seeking their equilibrium, and dispersion rises. In terms of quantiles, the 25%–75% range in 2016 was 0.99 – 1.22; in 2021 it was 1.04 – 1.30. These are not huge differences. In 2005, on the other hand, it was 1.16 – 1.47!

Above, we noted our view that ASR is of limited usefulness when a good housing price index is available. Zillow publishes its ZHVI (a hedonic index) for Charlottesville, with very timely data. As discussed in our earlier work, it tracks our own RSI well, which gives us confidence in its accuracy. We superimposed the year-on-year change in the ZHVI on the ASR.

It is evident that the ASR tracks year-on-year change in housing prices as measured by the ZHVI quite closely, just as we suspected. Therefore, we don’t think that pointing out instances of high ASR tells anyone very much about the housing situation that isn’t better captured by price indices. There is one exception, however: the standard price indices are not broken down by certain characteristics of interest, such as assessment grade or property type. If ASR is a good proxy for year-on-year housing price change, then it may be useful to calculate ASR for subsets of the Charlottesville market to get a sense of relative performance.

Before we get to that, since we are talking about housing price growth and whether Charlottesville is an exception, we’ll take this opportunity to give an update of the most recent data for Charlottesville and the nation on two of our preferred indices.

As it has since 2010, Charlottesville is experiencing slower price growth than the national average. This is not to deny that an annual price increase of this magnitude (whether 8% or 14%) is startling and, to an aspiring homebuyer, discouraging. However, it does strongly suggest that the increase has little to do with Charlottesville specifically. There is no precedent for zoning changes or supply additions in a small jurisdiction proving sufficient to reverse, even locally, a powerful national trend.

We now turn to the ASR for various subsets of the Charlottesville residential real estate market. First, we looked at ASR by assessment grade. The tax assessor assigns a “quality grade” to each property to reflect its condition and build quality alongside objective quantitative measures like square footage, bathroom count, etc.

For most of our sample history, the ASRs of A/B-graded and C/D-graded properties sit right on top of each other; where there is divergence, A/B properties have the higher ASR, and we don’t see much relative movement. In 2021, however, that changed: almost all of the increase in ASR comes from C/D-graded properties. This is an interesting phenomenon. Why would lower-graded properties be finding bids further above assessment than A/B properties? One answer would be that development potential is getting capitalized into prices. A/B properties are generally too expensive to make sense as tear-down or conversion opportunities; C/D properties, on the other hand, are the natural target of buyers who have development potential in mind. Is this early evidence of the market responding to the FLUM? If so, it echoes the findings of Kuhlmann’s and Damiano’s papers on Minneapolis and Freemark’s paper on Chicago: development potential is rapidly capitalized into land and into parcels with lower-quality structures. Another way to look at this phenomenon is the ratio of the A/B ASR to the C/D ASR. It has almost always been well above 1, but in 2021 it fell to 0.93.
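
Continuing the illustrative pandas sketch from above (and again assuming a hypothetical column, here grade, holding the assessor’s letter grade), the grouped calculation looks roughly like this:

```python
# Bucket letter grades into A/B vs C/D and take the median ASR by year and bucket.
merged["grade_bucket"] = merged["grade"].str[0].map(
    lambda g: "A/B" if g in ("A", "B") else ("C/D" if g in ("C", "D") else None)
)
by_grade = (
    merged.dropna(subset=["grade_bucket"])
    .groupby(["sale_year", "grade_bucket"])["asr"]
    .median()
    .unstack("grade_bucket")
)
by_grade["AB_to_CD"] = by_grade["A/B"] / by_grade["C/D"]  # the ratio discussed above
print(by_grade.tail())
```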

We next turn to ASR by property type. We split the market into single-family housing (detached and attached) and multifamily housing (duplex and above).

The first thing to note is that because the preponderant majority of the city’s multifamily housing consists of rentals (multifamily units actually represent a majority of the city’s housing units), data on multifamily sales are scarcer, and the result is a more volatile series. While it is of less statistical reliability than the pattern we saw in the analysis by property grade, we do see another case of divergence in 2021: the increase in ASR is driven by single-family properties. What are some possible interpretations? We have noted elsewhere that SFH prices in Charlottesville have outperformed MFH prices fairly consistently over time; MFH absorption has never been as strong as SFH absorption, and residents have shown a preference for SFH. But it could also be the same development-potential phenomenon showing through. It makes more sense to redevelop a less intensely used parcel, so it would stand to reason that the biggest bump to value under the FLUM would accrue to lower-quality SFH parcels.

Interestingly, it turns out that the state of Virginia publishes an annual report on statewide trends in ASR. The most recent report covers 2019. We thought it would be interesting to look at how Charlottesville compares to other cities, so we included all cities that actually performed a 2019 assessment (some cities assess less frequently than annually).

Charlottesville’s ASR was among the lowest in the sample.

While it was not the original subject of our analysis, our excursion through the data made us wonder about the question of “assessment regressivity.” Assessments, as we have noted, tend to run below market prices. However, the degree of misalignment is not consistent across properties. A consistent finding in the academic literature is that higher-priced properties tend to have a larger gap (even in percentage terms) between market prices and assessments. In practice, this creates a regressive bias in property taxes: lower-valued properties tend to pay a higher percentage of true value in property tax than higher-valued properties. We did a high-level analysis to check on this phenomenon in Charlottesville, plotting sale price quantiles against assessed value quantiles for all sales after 2015. If there were no bias, the dots would lie directly on the diagonal; where the dots lie northwest of the diagonal, there is evidence of a larger gap between sale prices and assessments. And indeed, in Charlottesville we see some evidence of regressive bias as sale prices rise above $500K.
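
A rough sketch of that comparison, continuing the hypothetical data frame from the earlier code: with assessed value quantiles on the horizontal axis and sale price quantiles on the vertical axis, points northwest of the 45-degree line correspond to sale prices running further above assessments.

```python
import numpy as np
import matplotlib.pyplot as plt

recent = merged[merged["sale_year"] > 2015]
qs = np.linspace(0.05, 0.95, 19)
assess_q = recent["assessed_value"].quantile(qs)
sale_q = recent["sale_price"].quantile(qs)

plt.scatter(assess_q, sale_q)
lim = float(max(assess_q.max(), sale_q.max()))
plt.plot([0, lim], [0, lim])  # 45-degree line: the no-bias benchmark
plt.xlabel("Assessed value quantile ($)")
plt.ylabel("Sale price quantile ($)")
plt.show()
```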

We then drilled down by calculating the Price-Related Bias (PRB), a measure of vertical tax equity devised by the International Association of Assessing Officers (IAAO). For the period, the PRB showed a regressive bias but not to a degree that falls outside the range of IAAO best practice guidelines. However, when we looked at properties other than single-family detached homes, we did find a PRB that was regressive (value below zero) and outside the IAAO norms. Below, the PRB is plotted by year for all properties and for non-SFH.
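
For reference, here is a minimal sketch of the PRB calculation as we understand the IAAO ratio-study formulation: regress the proportional deviation of each assessment ratio from the median ratio on the base-2 log of a value proxy, so that the coefficient reads as the change in the assessment ratio per doubling of value. The column names continue the hypothetical schema above, and this should be treated as illustrative rather than as a reference implementation.

```python
import numpy as np

def prb(sale_price, assessed_value):
    """Price-related bias coefficient (negative values indicate regressivity)."""
    s = np.asarray(sale_price, dtype=float)
    a = np.asarray(assessed_value, dtype=float)
    ratio = a / s                        # assessment ratio A/S
    med = np.median(ratio)
    value_proxy = 0.5 * (s + a / med)    # blend of sale price and adjusted assessment
    y = (ratio - med) / med              # proportional deviation from the median ratio
    x = np.log(value_proxy) / np.log(2)  # per doubling of value
    slope, _intercept = np.polyfit(x, y, 1)
    return slope

# PRB by year of sale, using the hypothetical columns from the earlier sketch.
print(merged.groupby("sale_year").apply(lambda g: prb(g["sale_price"], g["assessed_value"])))
```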

Although assessment regressivity does not appear to be a critical problem for the city, it is still worth investigating further. If a declared aim of the Comprehensive Plan is to prevent displacement of lower-income residents, it would make sense to remove all traces of regressivity from the city’s property tax structure.

September 2021

Appendix: ASR and Annual Price Change

The assessment for a given property in a given year is based on a vector of property attributes and a vector of coefficients representing the value the market ascribes to each of those attributes; together these represent the true market value. The assessment for a given tax year is performed at the end of the prior year, and it involves error. We decompose that error into two parts. The first is a systematic assessment factor that applies to all assessments, capturing the degree to which properties are under-assessed: a factor of 1 would be a perfectly accurate assessment, while a factor of 0.9 would mean each property is assessed at 90% of its true market value. We assume no regressivity bias. The second part is property-specific error, which we assume is independent, mean-zero, and normally distributed.
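
In symbols (the notation here is ours): write $A_{i,t}$ for the assessment of property $i$ for tax year $t$, $X_i$ for its attribute vector, $B_t$ for the market’s valuation coefficients in year $t$, $k_t$ for the systematic assessment factor, and $u_{i,t}$ for the property-specific error, which we write as additive for simplicity. Since the assessment for year $t$ is performed at the end of year $t-1$:

$$ A_{i,t} = k_t \left( X_i' B_{t-1} \right) + u_{i,t} $$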

And for market price:
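
$$ P_{i,t} = X_i' B_t $$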

The market-value-to-assessed-value ratio is:
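
$$ \frac{P_{i,t}}{A_{i,t}} = \frac{X_i' B_t}{k_t \left( X_i' B_{t-1} \right) + u_{i,t}} $$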

And annual price change is:
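
$$ g_{i,t} \equiv \frac{P_{i,t}}{P_{i,t-1}} = \frac{X_i' B_t}{X_i' B_{t-1}} $$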

Since $u$ is mean-zero, it will wash out when averaged over a large number of properties. The vector of values ($B$) also changes slowly, so we assume it does not change over the span of one year. We can then see that the average assessment tends to equal the lagged price multiplied by the systematic assessment factor.
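
$$ \mathbb{E}\left[ A_{i,t} \right] \approx k_t \, X_i' B_{t-1} = k_t \, P_{i,t-1} $$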

Assessment procedures change slowly, so we work under the assumption that the expected systematic assessment error is simply the long-term mean assessment error, which can be taken as a constant factor:
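
$$ k_t \approx k $$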

Therefore, the sales-to-assessment ratio (our ASR, with the sale price in the numerator) reduces to:
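
$$ \frac{P_{i,t}}{A_{i,t}} \approx \frac{1}{k} \cdot \frac{P_{i,t}}{P_{i,t-1}} = \frac{g_{i,t}}{k} $$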

In effect, the ASR functions as a noisy signal of price growth. Assuming $k < 1$, it will also be an exaggerated signal of price growth.