About the Data
Data Limitations
This is a brief guide to Data Limitations for
users of this Web site. It discusses some of the
practical reasons why agency databases may have
limitations. It also discusses why information
about the quality of government data is often
limited or absent. Finally, it highlights some of
TRAC's general findings and concerns regarding
statistics about Internal Revenue Service
enforcement activities.
Why Government Databases May Have Limitations
(and we may not know about them)
Keeping track of the operations of any large
bureaucracy is a challenge. This is especially true
when the bureaucracy -- like the Internal Revenue
Service or the Justice Department -- is divided up
into numerous semi-autonomous units whose
activities are not always subject to uniform or
exact definition. It is further complicated when
the activities to be tracked involve a large number
of field offices, each of which handles diverse
problem areas.
With rare exception, federal law enforcement
agencies have computerized data systems to manage
workload and track activities. These data systems
also usually serve as the main information source
for generating operating statistics for both
internal and external audiences. Within a single
agency, a number of independent databases are
usually maintained, often organized along agency
division lines.
Some agencies (and units within an agency) do a
better job of tracking their important activities
than others. But data quality and coverage cost
money -- in staff time and computing infrastructure
to gather and record the information, to validate
and analyze the information, and to continually
manage the process. Managing the process is
important: to ensure that the information recorded
reflects what the agency is doing and keeps up with
changes in technology, agency reorganizations,
shifting responsibilities, revisions in legal
definitions, and even societal changes which all
impact how information needs to be recorded.
In a climate of tight budgetary resources, it is
not surprising that resources often fall short of
what is needed to ensure an accurate and complete
accounting of agency activities.
Often little attention is given to whether the data
generated from agency computer systems are accurate
-- particularly if inaccuracies don't cause obvious
problems for agency managers. Thus, while the
Internal Revenue Service devotes considerable time
and effort to matching income reports on taxpayer
returns with information reported by third parties
about payments made to taxpayers, much less effort
is devoted to determining whether the revenue IRS
claims to generate through its tax audits
ultimately ever gets assessed and actually
collected. Where no money is being tracked and a
data system is used primarily to count the agency's
"successes" -- in the number of completed cases,
referrals passed on, or criminal convictions --
there may be an active disinterest in checking too
closely so long as the number of recorded successes
keeps rising at the desired pace.
What may be surprising is that often little
attention is given by institutions of oversight
such as Congress and the General Accounting Office
to the accuracy of data systems when "actions"
rather than "money" are being tracked. Usually, the
agency's counts are accepted at face value and used
as the basis for appropriations as well as new
legislative initiatives, with little investigation
or probing of what lies under the hood. As a result
of this combined disinterest on the part of both
the agency and our watchdogs, we often have little
solid knowledge about how accurate and complete (or
inaccurate and misleading) most agency statistics
actually are.
Accuracy and Completeness of IRS
Enforcement Statistics
(what we do know)
Without access to the underlying detailed agency
transactional records, it is often very difficult
to judge the quality of statistics received from
government sources. Three types of general
indicators that TRAC employs in its research are
discussed elsewhere (see Judging the Quality of
Government Data).
Statistics on Tax Audits
TRAC knows of no systematic study on the
reliability of IRS counts of audits and their
distribution among districts and taxpayer
classes. But several areas of concern can be
noted.
First, audit counts can be misleading for a
number of reasons. In 1994 IRS changed its
definition of what it counts as an "audit," and
recalculated its audit numbers using this new
definition back to 1988. It now counts all
correspondence contacts of taxpayers by tax
examiners at its Service Centers as audits.
These are typically generated by computer
matching of returns with third-party
information to spot potential discrepancies. In
examining audit rates, it is thus very
important to recognize that an "audit" means
something very different today than it did a
few years ago.
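The correspondence-audit mechanism described above -- matching return line items against payments reported by third parties -- can be sketched in a few lines. This is only an illustration of the matching idea; the field names, record layout, and tolerance are invented, not actual IRS data formats.

```python
# Illustrative sketch of third-party information matching, the mechanism
# behind computer-generated correspondence "audits". Field names and the
# tolerance threshold are hypothetical.

def find_discrepancies(returns, third_party_reports, tolerance=1.0):
    """Flag returns whose reported income falls short of the total
    payments that third parties reported for the same taxpayer."""
    # Sum third-party payment reports (e.g. wage and interest forms)
    # per taxpayer identifier.
    reported_to_irs = {}
    for report in third_party_reports:
        tid = report["taxpayer_id"]
        reported_to_irs[tid] = reported_to_irs.get(tid, 0.0) + report["amount"]

    flagged = []
    for ret in returns:
        expected = reported_to_irs.get(ret["taxpayer_id"], 0.0)
        shortfall = expected - ret["income_reported"]
        if shortfall > tolerance:
            flagged.append((ret["taxpayer_id"], shortfall))
    return flagged

returns = [
    {"taxpayer_id": "A", "income_reported": 50000.0},
    {"taxpayer_id": "B", "income_reported": 30000.0},
]
third_party = [
    {"taxpayer_id": "A", "amount": 50000.0},  # matches the return
    {"taxpayer_id": "B", "amount": 30000.0},
    {"taxpayer_id": "B", "amount": 5000.0},   # payment missing from the return
]
print(find_discrepancies(returns, third_party))  # [('B', 5000.0)]
```

Each flagged discrepancy can then trigger a letter to the taxpayer, which under the post-1994 definition counts as an "audit" even though no examiner ever opens the taxpayer's books.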
This has a number of further implications.
Audit rates have become extremely variable from
one year to the next, since it takes few
resources to jack up computer-generated
Service Center audit counts. (See
graph.) Comparing audit coverage across
classes of taxpayers has also become more
problematic. While all such contacts may count
equally as an "audit," they may imply very different
activities and levels of thoroughness in the
examination of a taxpayer's books and papers.
While the growth in Service Center audits makes
this problem especially troubling in the audit
of wage-earner returns, the variability in the
thoroughness of audits has always been an
important issue with business and corporate
returns. For example, the fact that most large
corporate returns are examined doesn't
necessarily mean that equally thorough audits
take place. The audit of small businesses can
be more thorough because it is so much easier
to examine all relevant transactions for a
small business than for a
multi-national corporation.
Second, the dollar amounts IRS reports as
resulting from its audits also can be very
misleading. This is because the figures IRS
reports are usually based upon initial audit
recommendations, not final assessments or
amounts ultimately collected. There can be a
large difference between these initial
recommendations and final assessments and
collections. When significant dollar amounts
are at stake, taxpayers have sufficient
financial incentive to contest IRS auditor
recommendations. Such appeals are surprisingly
successful and result in large reductions from
initial audit recommendations to final
assessments. (See
tables.)
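The gap between what is recommended, what is assessed, and what is collected can be made concrete with a simple pipeline calculation. The drop-off rates below are invented for illustration only; the actual reductions are in the referenced tables.

```python
# Toy illustration of why dollar figures based on initial audit
# recommendations can overstate actual revenue. The 60% and 70%
# drop-off rates are hypothetical, not IRS statistics.

def pipeline(recommended, assessed_fraction, collected_fraction):
    """Follow a recommended dollar amount through assessment and collection."""
    assessed = recommended * assessed_fraction
    collected = assessed * collected_fraction
    return assessed, collected

recommended = 1_000_000.0
# Suppose appeals cut the recommendation to 60% at assessment, and only
# 70% of the assessed amount is ultimately collected (assumed rates).
assessed, collected = pipeline(recommended, 0.60, 0.70)

print(f"Recommended: ${recommended:,.0f}")  # $1,000,000
print(f"Assessed:    ${assessed:,.0f}")     # $600,000
print(f"Collected:   ${collected:,.0f}")    # $420,000
```

Under these assumed rates, reporting the recommended figure would overstate the revenue actually produced by more than a factor of two.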
While not a problem with the statistics per se,
it is important to note that changes in
under-reporting found during audits don't imply
changes in tax evasion, or even in taxpayer
noncompliance more generally. This is primarily
because these changes are driven by the volume
(and targeting) of IRS enforcement activity --
not underlying changes in taxpayer compliance.
It should also be remembered that when IRS
turns up tax under-reporting, it doesn't
necessarily mean that the taxpayer committed
tax evasion or even in fact misreported his or
her taxes. A very substantial amount of what
IRS records as "noncompliance" occurs because
complexities in the law give rise to more than
one defensible interpretation of the amount of
tax that is truly owed.
Statistics on Criminal Enforcement
Serious problems in the data systems IRS uses
to track criminal enforcement activities are
well documented, and have existed for over a
decade. As a result, the information provided
to the public by the Internal Revenue Service
about the agency's criminal enforcement
activities is substantially inaccurate. Taken
as a whole, the data provide a highly
misleading picture of the enforcement efforts
of one of the IRS's most important components,
the Office of Criminal Investigation. See
Misleading and Inaccurate IRS Data and IRS
Criminal Enforcement Data Systems where
these problems are discussed in detail.
Because these data cannot be relied upon to
describe IRS's criminal enforcement efforts, we
have not used them on this Web site. Instead, we
have used information from federal prosecutor
tracking systems along with data from the
Administrative Office of the United States
Courts.