Our last blog post, Batch Is Better Than Real-Time, kicked off our blog series, 10 Best Practices for Procurement-side Tax Automation. This week, it's about becoming a student of your data.
In this day and age, it's extremely important to become a student of your data. Below are three key steps every good tax practitioner can take to save their company money and position themselves as an invaluable asset within their organization.
Step 1: Know Your Data
Gone are the days of hoping your data will suffice under audit. Hope is not a strategy, but education is. A great way to educate yourself is by identifying, contacting, and meeting with those people in your organization who live and breathe the data on a daily basis.
Valuable information can be gained from any of your organization's functional groups, but Accounts Payable, Procurement, Accounting, Operations, and IT should definitely top your list. Try to schedule an hour-long meeting with a key member of each of these groups, and you'll be amazed by how much useful information you'll learn about your data. While meeting with them, don't be afraid to get specific, as the devil usually lives in the details.
Below is a list of a few questions you can ask to get the ball rolling.
The above is not meant to be an exhaustive list. As noted, these are just a few questions that should generate some good information to begin the learning process.
Step 2: Capture Your Data
Once you're down the path of learning more about your data, your next step is to capture it properly. Here you should be thinking about two separate things. First, does the organization have all of the information needed to successfully defend tax positions under audit (e.g., segregated sales tax, line detail, etc.)? If not, it's good to perform a high-level analysis to determine the potential financial impact the missing data may have on the organization. If your analysis demonstrates a clear business case, you should jointly meet with Finance and IT to present your findings and push for the data change.
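If it helps to make that high-level analysis concrete, here is a minimal sketch in Python/pandas. Every column name, rate, and percentage in it is an illustrative assumption about your own extract, not a prescribed method: it simply flags spend where no tax was charged and no line detail exists to defend an exemption, then applies assumed rates to size the potential exposure.

```python
import pandas as pd

# Hypothetical AP invoice extract; column names and values are illustrative only.
invoices = pd.DataFrame({
    "invoice_id": [1001, 1002, 1003, 1004],
    "invoice_total": [12500.00, 830.00, 44000.00, 2150.00],
    "tax_charged": [0.00, 66.40, 0.00, 0.00],
    "has_line_detail": [False, True, False, True],
})

ASSUMED_TAX_RATE = 0.08          # blended state/local rate -- replace with your own
ASSUMED_TAXABLE_PORTION = 0.60   # rough share of spend presumed taxable -- an assumption

# Flag spend where no tax was charged and no line detail exists to prove exemption.
at_risk = invoices[(invoices["tax_charged"] == 0) & (~invoices["has_line_detail"])]

potential_exposure = (at_risk["invoice_total"].sum()
                      * ASSUMED_TAXABLE_PORTION
                      * ASSUMED_TAX_RATE)

print(f"Spend lacking defensible detail: ${at_risk['invoice_total'].sum():,.2f}")
print(f"Rough potential exposure:        ${potential_exposure:,.2f}")
```

Even a rough number like this is usually enough to frame the business case when you sit down with Finance and IT.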
Next, you need to consider how the data is going to be pulled. Are you going to be self-reliant, or do you plan to leverage IT to push the data on demand? If you plan to leverage IT, it's always good to mock up some sample reports to better convey the exact deliverable(s) you need from them. Make sure your samples include all pertinent fields, examples of how certain information may need to be rolled up, and descriptions for any codes that may be included in the report (such as GL account descriptions, cost center descriptions, commodity code descriptions, etc.).
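As one way to mock up such a sample, here is a small Python/pandas sketch. The field names and lookup tables are hypothetical stand-ins for whatever your ERP actually exposes; the point is just to show the roll-up level and the attached code descriptions you expect IT to deliver.

```python
import pandas as pd

# Hypothetical transaction extract and lookup tables; every field name here is an
# illustrative assumption about what your ERP might expose.
transactions = pd.DataFrame({
    "gl_account": ["6100", "6100", "7200"],
    "cost_center": ["CC10", "CC10", "CC22"],
    "commodity_code": ["IT-HW", "IT-HW", "MRO"],
    "amount": [1200.00, 800.00, 430.00],
})
gl_descriptions = pd.DataFrame({
    "gl_account": ["6100", "7200"],
    "gl_description": ["Office Supplies", "Repairs & Maintenance"],
})
cc_descriptions = pd.DataFrame({
    "cost_center": ["CC10", "CC22"],
    "cc_description": ["Corporate IT", "Plant Maintenance"],
})

# Roll spend up to the level you want delivered, then attach the code
# descriptions so reviewers aren't left guessing what "6100" or "CC10" means.
sample_report = (transactions
                 .groupby(["gl_account", "cost_center", "commodity_code"], as_index=False)
                 .agg(total_amount=("amount", "sum"))
                 .merge(gl_descriptions, on="gl_account", how="left")
                 .merge(cc_descriptions, on="cost_center", how="left"))

sample_report.to_csv("sample_ap_report_mockup.csv", index=False)
print(sample_report)
```

Handing IT a mockup like this, even with made-up numbers, removes most of the back-and-forth about what "all pertinent fields" actually means.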
If you're going to be self-reliant in capturing the data, you need to determine whether you have the proper tools to be successful. For example, do you have access to the necessary queries, databases, and/or tables needed to extract the data out of the system? If so, do you also have access to the proper tools to manipulate the data upon request? Even though MS Excel has done a great job overcoming its historical volume limitations, you might find that it's still not ideal for manipulating the larger data sets that are usually required during audits. Most self-reliant tax practitioners tend to leverage more robust data analysis programs for this type of work, such as MS Access, ACT, or ACL.
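For the self-reliant route, here is a minimal Python/pandas sketch of handling an extract that would strain a single spreadsheet. The file name, columns, and the Texas filter are all assumptions for illustration; the technique is simply streaming the extract in chunks so memory stays flat at audit-sized volumes.

```python
import pandas as pd

# Illustrative extract name and columns -- assumptions, not a real feed.
EXTRACT = "ap_invoice_extract.csv"

# Create a tiny stand-in file so the sketch runs end to end.
pd.DataFrame({
    "invoice_id": range(1, 11),
    "state": ["TX", "CA", "TX", "NY", "TX", "CA", "TX", "NY", "CA", "TX"],
    "amount": [100.0] * 10,
    "tax_charged": [8.25, 0.0, 8.25, 0.0, 0.0, 7.25, 8.25, 8.875, 0.0, 0.0],
}).to_csv(EXTRACT, index=False)

# Stream the extract in chunks so memory stays flat even at audit-sized volumes,
# filtering to the jurisdiction under review as we go.
chunks = []
for chunk in pd.read_csv(EXTRACT, chunksize=100_000):
    chunks.append(chunk[chunk["state"] == "TX"])

tx_invoices = pd.concat(chunks, ignore_index=True)
print(f"TX invoices pulled: {len(tx_invoices)}")
print(f"TX self-assessment candidates (no tax charged): {(tx_invoices['tax_charged'] == 0).sum()}")
```

Whether you do this in Python, MS Access, or ACL matters less than having a repeatable pull you can rerun every time an auditor asks for a refreshed population.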
Step 3: Review Your Data
Once you have the ability to pull the data, it's vital to review it on a regular basis. The frequency of your reviews will obviously vary based upon the size, complexity, and overall needs of the organization, but you should plan to review it at least once a quarter.
In reviewing your data, you should be looking for data anomalies, coding errors, omissions, and any other trends that could cause the organization to quickly fall out of compliance with a taxing jurisdiction. The purpose of the review is not only to stay abreast of how the data changes over time and by user, but also to promote internal training and/or external vendor awareness before any errors get out of hand.
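By way of illustration, here is a short Python/pandas sketch of the kinds of checks a quarterly review might run. The field names, the assumed 6%-10% rate band, and the flags themselves are hypothetical examples, not a standard checklist: it looks for missing tax codes, out-of-range effective rates, and which preparers the flagged lines trace back to.

```python
import pandas as pd

# Hypothetical quarterly extract; field names and thresholds are illustrative
# assumptions about what a procurement-side review might look for.
data = pd.DataFrame({
    "invoice_id":  [1, 2, 3, 4, 5],
    "entered_by":  ["AP_USER1", "AP_USER1", "AP_USER2", "AP_USER2", "AP_USER2"],
    "tax_code":    ["TX_STD", None, "EXEMPT", "TX_STD", None],
    "amount":      [500.0, 1200.0, 900.0, 250.0, 3100.0],
    "tax_charged": [41.25, 0.0, 0.0, 2.0, 0.0],
})

# 1. Omissions: lines posted without any tax code at all.
missing_code = data[data["tax_code"].isna()]

# 2. Coding errors: effective rate far outside an assumed 6%-10% expected range.
rate = data["tax_charged"] / data["amount"]
odd_rates = data[(rate > 0) & (~rate.between(0.06, 0.10))]

# 3. Trends by user: who is generating the flagged lines, so training can be targeted.
flagged = pd.concat([missing_code, odd_rates]).drop_duplicates("invoice_id")
by_user = flagged.groupby("entered_by")["invoice_id"].count()

print("Lines missing a tax code:\n", missing_code, "\n")
print("Lines with out-of-range rates:\n", odd_rates, "\n")
print("Flagged lines by preparer:\n", by_user)
```

Running the same checks every quarter also gives you a trend line, which is often more persuasive in a training conversation than any single flagged invoice.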
It always pays to become a student of your data. The more you can learn about how it originated, how to pull it, and how it changes, the better.