Saturday, December 06, 2014

CBO / JCT Scoring Models Flawed : Reform Needed

These two congressional institutions are charged with breaking down new laws and regulations: what they will cost and what their actual effects on the population will be. What the record reveals is that they cannot be trusted to accomplish the tasks that are their mandate, namely telling us what something will cost taxpayers and what effect these decisions will have on the country.

As with most other government institutions, they are flawed, and worse, they don't care enough to make the changes necessary to improve results. They have to know their system is not working. Is it just easier to do nothing and let others worry about the destruction they leave behind? Or is this just more government bureaucrats failing to deliver?

The CBO Needs Dynamic, not Static, Scoring Models
Source: Richard Rahn, "Rejecting imaginary budget numbers," Washington Times, December 1, 2014.

December 4, 2014

When Congress passes a spending bill or a new tax measure, it is scored by either the Congressional Budget Office (CBO) or the Joint Committee on Taxation (JCT). Unfortunately, both entities use flawed scoring models that routinely produce inaccurate estimates.

The CBO (whose director is appointed by the Speaker of the House and the Senate's President pro tempore) looks at the cost and economic consequences of federal spending bills, while the JCT (consisting of five Senate Finance Committee members and five House Ways and Means Committee members) looks at tax bills.

The problem with each group's analysis is that both largely use "static" models to evaluate tax and spending policy, as opposed to "dynamic" models. What's the difference? Cato Institute Senior Fellow Richard Rahn explains:
  • Static scoring ignores people's behavioral changes in response to policy changes -- for example, it would ignore the likelihood that a large tax increase would cause earners to find ways to avoid paying the higher taxes.
  • As a result, static scoring tends to overestimate revenue gains from tax increases and overestimate revenue losses from tax cuts.
  • Conversely, dynamic scoring takes these behavioral responses into account.
For example, Rahn notes that the capital gains tax rate (a tax on the sale of assets, such as stocks) has been changed multiple times, including in 1978, 1981, 1986 and 1996. Remarkably, not only were the JCT's revenue estimates of those tax changes off base, but Rahn says the group estimated revenue gains when there were actually losses, and losses when there were actually gains.

Why? Static scoring ignored behavioral responses, including the fact that a low capital gains rate would encourage more investors to buy and sell assets and would encourage investment, spurring job growth.
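To make the distinction concrete, here is a minimal illustrative sketch in Python. It is not anything the CBO or JCT actually uses; the baseline realizations, the tax rates, and the elasticity figure are made-up assumptions chosen only to show how a static model can predict a revenue gain from a rate hike while a dynamic model, which accounts for the behavioral response, predicts a loss.

```python
# Toy comparison of static vs. dynamic revenue scoring for a capital gains
# rate change. All numbers below are illustrative assumptions, not CBO/JCT data.

def static_score(base_realizations, old_rate, new_rate):
    """Static scoring: assumes the amount of gains realized never changes."""
    return base_realizations * new_rate - base_realizations * old_rate

def dynamic_score(base_realizations, old_rate, new_rate, elasticity=-0.8):
    """Dynamic scoring: realizations respond to the rate change.

    A negative elasticity (assumed here) means a higher rate leads investors
    to realize fewer gains -- they hold assets longer or avoid the tax.
    """
    pct_rate_change = (new_rate - old_rate) / old_rate
    new_realizations = base_realizations * (1 + elasticity * pct_rate_change)
    return new_realizations * new_rate - base_realizations * old_rate

if __name__ == "__main__":
    base = 500e9            # assumed baseline of realized gains, in dollars
    old, new = 0.20, 0.28   # hypothetical rate increase

    print(f"Static estimate:  {static_score(base, old, new) / 1e9:+.1f} billion")
    print(f"Dynamic estimate: {dynamic_score(base, old, new) / 1e9:+.1f} billion")
```

With these assumed inputs, the static model projects roughly +$40 billion in new revenue, while the dynamic model projects about a $5 billion loss, which is the kind of sign flip Rahn describes.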

Rahn notes that today the JCT does use some level of dynamic scoring when it produces estimates, but it refuses to reveal its assumptions.
 
