Is It Time for IT to Get Lean?

Lean Six Sigma is a managerial concept that focuses on eliminating waste, reducing defects, and promoting continuous improvement. The methodology is generally associated with manufacturing, but its concepts can be applied effectively to almost any industry. After all, don't industries across the board need to become more cost efficient and increase quality? Yet technology companies and IT departments are often forgotten when Lean and Six Sigma principles are adopted; even in companies that have successfully deployed the concepts elsewhere in their organization, the application support and IT teams were left off the invitation. Why is IT so often overlooked when it comes to deploying highly effective project management practices?

Lean Six Sigma principles can be applied directly to any IT project, from a software deployment to an infrastructure build; the methodology forces project teams to consider all possible solutions before jumping into an implementation. Significant time is dedicated to defining objectives and measurable goals before a solution is selected, ensuring it is directly in line with the organization's strategic plan, which is precisely where most IT projects fall short. Unfortunately, it is all too common for organizations to skip straight to a solution without spending the time to verify it is the optimal one for their business, whether due to an over-promising salesperson, familiarity with the product, or a general "it works for them" approach. Every business is unique, and defining requirements should be the foundation of any project implementation. Too often, IT teams with significant bandwidth constraints perpetuate this "quick fix" mindset rather than dedicating the time to find the right solution. The more time spent defining business requirements up front, the less time your organization's IT team will need to spend customizing the system and working out its kinks later.

While Lean Six Sigma is generally a top-down managerial culture that takes years to deploy successfully within an organization, there are some basic concepts that can be applied effectively to any project. The standardized project phase approach itself is a significant step up on the project management maturity model. Using a phased approach such as Define, Measure, Analyze, Improve/Implement, and Control, also known as the DMAIC process, keeps a project focused on the best possible scenario for the business. This process also facilitates communication and creativity among team members and enables an invaluable sharing of knowledge that must be considered throughout the project deployment.

The DMAIC process is as follows:

  • Define | Define the overall project details such as the measurable objectives, scope, restraints, timeline, and budget. Make sure to involve all stakeholders in these discussions and create a formal document for all to approve.
  • Measure | Identify data that will be used to measure the success of the project, and create baselines for your objectives. This may be the hardest phase of an IT project, and it is also most commonly overlooked. However, this phase defines what “done” looks like for a project, whether that is the reduction of lag time or full migration to a new platform. Thoroughly defining your final result will help keep the end in mind and minimize scope creep through the creation of measurable goals.
  • Analyze | Root cause analysis is the primary purpose of this phase, to dig into the weeds of the problem and identify the source. Effective solutions are based on the issues, not the symptoms. Identify what the ideal future state would look like and what it would take to get there.
  • Improve and/or Implement | Here, identified solutions are tested for feasibility, often through a proof of concept (POC). Because the final solution is selected and implemented during this phase, it is very common for this portion of the overall project to become a project in and of itself.
  • Control | Lastly, the implemented solution is monitored using the measurements defined in Phase II to verify and confirm the expected results. If the desired results are not being seen, adjustments are made. Ultimately, view this as the stability phase, making tiny adjustments until everything is in balance. Once the expected results have been shown and are stable, hand-off to the project and/or business owners is then conducted.
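The phase-gate discipline DMAIC imposes, completing each phase before starting the next, can be sketched in a few lines. This is a minimal illustration only; the `next_phase` helper is hypothetical and not part of any Lean Six Sigma standard:

```python
# Minimal sketch of a DMAIC phase-gate checklist (illustrative only).
DMAIC_PHASES = ["Define", "Measure", "Analyze", "Improve/Implement", "Control"]

def next_phase(completed):
    """Return the next phase to work on, given the phases already signed off.

    Phases must be completed in order; skipping ahead is not allowed.
    """
    for phase in DMAIC_PHASES:
        if phase not in completed:
            return phase
    return None  # all phases complete: ready for hand-off

# Example: Define and Measure are signed off, so Analyze comes next.
```

The point of the sketch is the ordering constraint: a team cannot reach Improve/Implement until measurable goals from the Measure phase exist to test against.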

The DMAIC process requires considerable discipline to keep the project team from jumping straight to the end. Pressure from both upper management and stakeholders often make it difficult to resist implementing the first solution that arises. However, keeping on track with this methodical management style for an IT project will ensure that your solution will be the most ideal for your business and that it will be implemented correctly with verifiable results.

Sabrina Schindler is a consultant with Eide Bailly Technology
Consulting. She is a certified Project Management
Professional (PMP) and has more than 7 years of experience
managing software application implementation and
optimization projects covering scope, timelines, and


7 Tips for Disaster Recovery

As organizations become increasingly dependent on their IT systems, preparedness for a potential disaster has become a critical component of risk management. A disaster recovery (DR) plan is designed to provide continuity in business services in the event of a disruption, and an effective plan is strongly impacted by the proper provisioning and preparation of a company's IT department.

As such, we have identified seven areas to consider when developing a well-rounded disaster recovery solution:

  1. Backups alone are not a disaster recovery plan. Securing backups at an off-site location is only the first step in a DR plan. A true disaster recovery solution involves a recovery environment that will operate in lieu of your company’s production environment if needed. Problems, mistakes, and errors are all par for the course when building and testing a recovery environment; prepare before a disruptive event occurs so that recovery will run smoothly at the time of a disaster.
  2. Prioritize and monetarily quantify your business processes in terms of loss of revenue, productivity, and reputation due to downtime from a disruptive event. For a DR plan to provide business value, the cost should be proportionate to the losses your business would incur. Perform a business impact analysis and develop risk mitigation strategies that match your business needs, financial constraints, technological capabilities, and any industry regulations.
  3. Engage relevant technologies, such as virtualization and cloud-based DR. Virtualization involves separating an operating system from the physical machine, and it can be a great tool to utilize in disaster recovery plans because it eliminates the need to match DR hardware to production hardware. Alternatively, cloud-based technology can be utilized in DR strategies through: 1) Production and DR services in the cloud; 2) On premise point-in-time backup to the cloud with restore either on premise or to the cloud; and 3) Replication to cloud virtualization.
  4. Explore co-location data center options. Co-location involves a shared location that provides businesses with facility logistics such as space, power, security, and connectivity to network and telecommunication services. Businesses provide their own hardware and software in these scenarios, which allows for more flexibility than managing hosted DR services but also requires greater management and maintenance from your company’s IT operations. When exploring co-location data centers, choose a facility with a high speed network and redundant backbone.
  5. Leverage a branch office for disaster recovery. Alternatively, geographically dispersed businesses can use a branch office, rather than a co-location data center, to provide their facility logistics; however, a branch IT infrastructure network needs to be implemented before an office can be leveraged as a recovery site. Virtualize servers, disk-based storage, and applications to be platform and location independent. Wide area network (WAN) performance is of great importance in these environments; therefore, use WAN optimization techniques to increase data-transfer efficiencies across locations.
  6. Test your disaster recovery plan. Developing a disaster recovery plan should always be approached with successful testing as the outcome. Testing requires documented procedures and checklists to execute and verify your IT recovery process, and it follows a general sequence of recovering infrastructure, applications, and business processes in a recovery environment. The recovery environment needs to be a separate network, which is why testing can be challenging and is often overlooked.
  7. Retest your disaster recovery plan at least annually. Retesting and continuous improvement go hand-in-hand with DR plan maintenance to ensure that your company is matching technologies with business needs and implementing the best testing strategies. Retesting allows you to integrate significant changes to business processes or infrastructure for new testing procedures. It is an opportune time to review advancements in server and storage technologies for disaster recovery. Also, pricing for DR technologies can come down over time, making previously cost prohibitive options now more viable for your organization.
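Tips 6 and 7 boil down to a simple rule: record when each plan was last successfully tested, and flag it once that proof goes stale. A minimal sketch, assuming the annual retest interval recommended above (the `retest_due` helper is illustrative):

```python
from datetime import date, timedelta

# Hypothetical helper: flag a DR plan whose last successful test is older
# than the retest interval. The 365-day default mirrors the annual
# retesting recommendation; adjust it to your own compliance needs.
def retest_due(last_tested, today, interval_days=365):
    """Return True if the DR plan should be retested."""
    return (today - last_tested) > timedelta(days=interval_days)

# A plan last tested 14 months ago is overdue for its annual retest.
```

In practice, this date would live alongside the documented procedures and checklists from tip 6, so the retest reminder and the test plan travel together.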

Given the tremendous cost of downtime and the business impact resulting from a disaster, it is poor practice to ignore the business need for a well-developed disaster recovery plan. The development and maintenance of a DR plan is complex and requires resources, but with thorough planning, testing, and continuous improvement, companies of any size and revenue can successfully address and meet their disaster recovery needs.

Kevin Bingeman is a platform support manager with Eide
Bailly Technology Consulting. With over 20 years of
industry experience, Kevin’s expertise encompasses the
planning, budgeting, design, implementation, and
management of new technologies to support business
operations and organizational goals.


Scalable Cloud ERP in the Oil & Gas Industry

Finding a system that fits your business’ current needs and will allow for future growth can sometimes feel like an urban business myth; the concept of both-now-and-later in itself is an oxymoron, the healthy fried food of software selection. Though it may seem like a tall tale, such solutions are available thanks to the growing selection of cloud and SaaS (Software as a Service) applications on the market today. When leveraged correctly, they adapt – expanding and contracting as needed to follow the elasticity in the marketplace – ultimately driving your business forward through increased efficiencies and on-demand resources. But many organizations struggle to see the potential in these options. The cloud is nothing new and nearly all facets of life are moving toward a mobile platform, but even still, when faced with the opportunity to virtualize, businesses often fail to see the possibilities.

This system is too integral to our business to be cloud-based.
We know what we know, and we don’t want to change and re-train.
How big of a difference can the cloud make anyway?

For one wholesale distributor in the oil industry, it was a game changer.

In just ten years, the business went from 45 staff and a 12-license cloud ERP implementation to more than 100 licenses and 165 full-time employees. More than tripling its sales, the company expanded across six states and has seen 400% growth, all while utilizing the same cloud ERP system – in this instance, NetSuite.

The key? Customization and scalability.

While the company was busy crossing state lines and breaking into new territories, they were able to manage their increasingly dispersed company as a single organization, handling resources as a single inventory. The system's flexibility allowed the wholesale distributor to bypass common infrastructure needs in new markets and simply use a mobile, online portal to access all their key data, from field images to project specs. Their growing new workforce utilized the system as a training tool, allowing them to connect with more senior technicians from afar to review transactions and customize project estimates, impacting the business' turnaround and bottom line. By creatively utilizing a dynamic cloud system within their organization, they were able to leverage their expertise to benefit the entire organization, regardless of location. With their astronomical success and growth, they are now looking at further customization and scalability opportunities through NetSuite, developing an asset management capability to expand beyond transactional data and provide clients with system access to pay bills, view project statuses, and manage requirements.

This single instance is a prime example of the untapped potential in cloud and SaaS systems. A single ERP solution was successfully implemented and integrated over a decade’s time, in a rapidly growing industry, and they aren’t done yet.

Imagine what this technology could do for your business.

D.C. Lucas is Eide Bailly Technology Consulting’s Business
Development Manager. With almost 20 years of experience,
D.C. helps organizations across multiple industries analyze,
develop, and maintain their current and future business
decisions as they relate to technology.


Optimize Your Environment to Support Your ERP Application

Optimizing performance and efficiencies is a key area of concern for any organization, and often, it begins by examining your current infrastructure environment. This is especially true when optimizing your ERP application. There are a number of factors that play into the performance of your ERP system, and they vary greatly based on which application you are running.

To aid you in optimizing your environment to better support your ERP application, I have identified key areas of consideration for Sage 100 Standard, Sage 100 Advanced and Premium, and Sage 500 editions. Additionally, I have compiled some basic troubleshooting and best practice tips for optimizing your ERP applications that can be translated across all ERP systems.

Sage 100 Standard
is a client-based system in which the client accesses a file share on a network server. As such, network and client PC performance are crucial. Optimize your environment by:

  • Following Sage requirements and recommendations
  • Ensuring you are not running the “Home” version of an operating system, as these are not meant for enterprise environments and cannot be joined to a domain.
  • Verifying the following:
    • Gigabit network: Is your network bandwidth sufficient for running your ERP processes? Network latency can be a real performance killer.
    • Workstation performance: More memory, faster processor, and disk speed are important factors to consider upgrading over a standard workstation.
    • File server: Disk speed and network performance are key server factors.

Sage 100 Advanced and Premium editions are client server-based with a database server residing either remotely or locally. Some processing with these editions is performed on the server running SQL, making server performance more crucial. Client performance is still important but not as vital as with Sage 100 Standard. Similarly, performance tips are comparable to Sage 500. Optimize your environment by:

  • Verifying the following:
    • Gigabit network: Is your network bandwidth sufficient for running your ERP processes? Network latency can be a real performance killer.
    • Workstation Performance: More memory, faster processor, and disk speed are important factors to consider upgrading over a standard workstation.
    • SQL server: Memory, processor, and disk speed are essential factors for overall performance.

Sage 500 is client server-based with a database server residing remotely. Most of the processing with Sage 500 is performed on the server running SQL. Client performance is important, because SAP® Crystal Report generation can expend a lot of resources.

  • Trusted Hardware: PCs, servers, and networking hardware from trusted vendors like IBM, HP, and Dell can increase performance and reliability.
    • Workstation Performance: More memory, faster processor, and disk speed are important factors to consider upgrading over a standard workstation.
    • It is important to be aware of Windows compatibility when purchasing new hardware.
  • SQL performance is key. As a starting point, use the below specs and modify as your organization’s users and database grow:
    • Plan for Growth: Disk, memory and central processing unit (CPU)
      • 2 to 8 CPUs
      • 8 GB+ Memory
        Reference the Sage 500 Compatibility Guide for the most updated information as the spec recommendations change regularly.
        Note: The operating system you are running can limit max memory used.
    • Storage Performance: Sage recommends the use of a storage area network (SAN) with high-speed network through a Fibre Channel or Internet Small Computer System Interface (iSCSI) for storage performance in larger environments. If that is not possible within your current environment, look to local solid-state drive (SSD) arrays to add performance.
      • 8 Gb Fibre Channel or 10 Gb iSCSI connectivity to your SAN
        • 1 Gb iSCSI networking can be a bottleneck for your storage performance if your array contains caching, SSDs, or multiple SAS drives.
        • Isolate your iSCSI storage networking traffic from your LAN traffic by using separate network interfaces when possible.
        • Don’t assume Fibre Channel is more costly; finding the storage networking solution that is right for your environment will require some research and cost-benefit analysis.
    • Disk speed is important. Do not use SATA disks as 7200 and 10K SATA drives can cause performance issues with Sage 500. Rather, 15K SAS disk drives are a good starting point to build a system that will allow you to increase performance as needed for your organization. You can add additional disks to your array to increase disk performance.
      • Deploy a minimum of 3 disk arrays, configured with RAID 1 or RAID 10 for database support. Note: RAID 5 is not supported and affects Sage 500 performance.
        • Database array
        • Transaction logs
        • Tempdb
    • Your SQL server should have a gigabit network interface or better for LAN connectivity.
    • Depending on the number of users or growth of your environment, it may become necessary to add a second SQL server to offload CPU-intensive activities, such as reporting.
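As a rough illustration of checking a server against the starting-point numbers above, consider the sketch below. The thresholds mirror the baseline figures in this post, not Sage's official requirements, so always defer to the Sage 500 Compatibility Guide:

```python
# Illustrative spec check against the starting-point numbers above
# (2 to 8 CPUs, 8 GB+ memory). These are baseline suggestions only;
# the real requirements change regularly, per the Compatibility Guide.
def meets_baseline(cpus, memory_gb):
    """Return a list of shortfalls against the baseline specs."""
    issues = []
    if cpus < 2:
        issues.append("add CPUs (baseline is 2 to 8)")
    if memory_gb < 8:
        issues.append("add memory (baseline is 8 GB+)")
    return issues
```

A server with 4 CPUs and 16 GB of memory would pass cleanly, while a single-CPU, 4 GB box would come back with both shortfalls listed.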

The Basics

When examining your ERP environment, consider the following steps your 101 course to optimization.

  • Create a performance baseline of your system (client and servers) when it is running well. Don’t wait until you think you have a performance problem to analyze the expected performance level.
    • Gather performance information at both high-usage times and slow times during different times of the day, week and month to accurately depict performance variances depending on the demands of users.
      • Be aware of the timing for network and system processes, including:
        • Backups
        • Antivirus scans
        • Payroll processing
        • Business-related processes, such as order processing, reporting and shipping
      • Examine the time it takes to run specific reports during high-usage and low-usage times. How long does it take to print a Sage report versus a print job of a similar page count and size?
      • Utilize Windows’ (or your OS’) performance monitoring tool to gather server performance baselines.
      • Know the size of your databases (Mas500, tempdb, master and transaction logs).
  • Verify that your performance level matches or exceeds your baseline after any upgrades, patches and system customizations. This will aid you in pinpointing when / if a performance decrease occurs.
  • Set up SQL and Sage system alerts.
  • Set up server alerts within your manufacturer’s tools to notify you of any hardware issues.
  • Test all customizations and system changes for performance problems before they go into production.
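The baseline-then-compare workflow in the steps above can be sketched as follows; the metric names and the 20% tolerance are illustrative assumptions, not prescribed values:

```python
# Sketch of the baseline comparison described above: store metrics when
# the system runs well, then flag regressions after upgrades or patches.
def find_regressions(baseline, current, tolerance=0.20):
    """Return metrics that worsened by more than `tolerance` vs. baseline.

    Both arguments map metric names to timings in seconds (lower is better).
    """
    regressions = {}
    for name, base in baseline.items():
        now = current.get(name)
        if now is not None and now > base * (1 + tolerance):
            regressions[name] = (base, now)
    return regressions

# Hypothetical timings captured during a known-good period vs. today:
baseline = {"monthly_sales_report": 40.0, "order_entry_post": 2.0}
current = {"monthly_sales_report": 55.0, "order_entry_post": 2.1}
# monthly_sales_report is 37.5% slower than baseline, so it is flagged.
```

The same comparison should be run against the high-usage and low-usage baselines separately, since a report that is fine at 7 a.m. may still regress badly under midday load.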

Tips & Best Practices for ERP Performance Optimization

The following are a list of basic best practices for troubleshooting your ERP performance:

  • Antivirus and other applications that run in the background, known as memory-resident programs, can cause performance issues. Shutting these down during performance testing is quick and easy.
    • Isolate your ERP server so that it isn’t running other processes such as Domain Control, DHCP, and DNS.
    • Isolate your ERP SQL server and its databases. Adding databases for your organization’s other applications, such as systems management or helpdesk, could have a negative impact on your ERP environment while also making it much more difficult to troubleshoot.
    • Verify that you have adequate disk space on both your server and any external drives where backups may be stored.
    • Perform scheduled SQL maintenance on your databases, including backups, indexing and database management.
    • Customizations to your ERP system, if not done properly, can cause performance issues. Document changes to all aspects of your environment – client, server and network – to easily address any issues if / when they arise.
    • Running large reports during business hours can cause performance issues. As such, it helps to know which reports require more time to run and to time them appropriately with your ERP environment.
    • Similar to reporting, data exports, if not timed correctly, can place an added load to your SQL server and cause performance issues. Be cognizant of when these are taking place.
    • Streamline your data access to cache only the set of reporting data you need and create a stored procedure, increasing efficiency and performance speeds.
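The last tip, caching only the reporting slice you need, can be sketched as a simple memoized fetch. Here `fetch_fn` stands in for a hypothetical stored-procedure call; the data is illustrative:

```python
# Sketch of the caching idea above: keep the reporting slice in memory
# instead of re-querying the full data set on every report run.
_cache = {}

def cached_report(key, fetch_fn):
    """Return cached reporting data, fetching it only on the first call."""
    if key not in _cache:
        _cache[key] = fetch_fn()
    return _cache[key]

calls = []
def fetch():
    calls.append(1)            # count round-trips to the database
    return [("2014-01", 100)]  # stand-in for stored-procedure output

first = cached_report("sales_ytd", fetch)
second = cached_report("sales_ytd", fetch)  # served from cache, no new call
```

A real implementation would also expire the cache on a schedule, so reports do not serve stale figures after the next posting run.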

Disaster Recovery

A disaster recovery plan for your ERP system is essential. It is one of those business items whose value you only recognize when you do not have one, and trust me when I say it pays to be prepared. Even a simple disaster recovery plan is better than none at all.

  • Create daily, weekly and monthly backups / restore points. Depending on the amount of data your organization processes, it may also be beneficial to increase the frequency of your transactional backups.
  • Keep your data offsite as well as onsite for accessibility and protection.
  • Test your backups on a monthly basis. Often organizations find that this is an easy task to add to their end-of-month (EOM) processing.
  • Test your disaster plan through a recovery-and-restore of your system to different hardware.
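The daily/weekly/monthly schedule in the first bullet can be sketched as a simple tiering rule; the Sunday and first-of-month cutoffs below are assumptions to adjust to your own EOM cycle:

```python
from datetime import date

# Illustrative daily/weekly/monthly labeling for the backup schedule
# above. Which day counts as "weekly" or "monthly" is an assumption.
def backup_tier(day):
    """Classify a backup date as monthly, weekly, or daily."""
    if day.day == 1:
        return "monthly"
    if day.weekday() == 6:  # Sunday
        return "weekly"
    return "daily"
```

Tiering the backups this way also makes the monthly restore test easy to script: pick the most recent "monthly" backup and restore it as part of EOM processing.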

Stay tuned for next month’s blog post covering the seven key areas of disaster recovery plans.

Orrian Rich is a senior systems engineer with Eide Bailly
Technology Consulting, Infrastructure Services. With over 25
years of field experience, Orrian’s infrastructure and strategic
planning expertise aids clients in their systems selection and
network/database management.

The Cloud & The Evolving Role of the CFO

In today’s business environment, we are seeing an exciting shift in the role of the CFO. Through the leveraging of cloud solutions, CFOs are showcasing their business savvy to partners by providing more than historical, rear-view information; rather, the cloud is aiding them in delivering forward-looking analyses of organizational growth, opportunities, and vulnerabilities.

With the cloud, financial departments are no longer simply collecting, paying, and reporting as in the past; they are spearheading value-added, strategic activities, using the cloud as the groundwork to automate and execute day-to-day business processes. By embracing cloud ERP systems’ ability to provide real-time data, the CFO is able to focus on expediting processes and providing key stakeholders with accurate, timely, and in some cases live business reporting. Further, the flexibility of the cloud opens the door for creative problem solving, providing limitless potential to create custom, role-based dashboards. This specialized visual data can not only reveal new growth potential within the organization, but also keep team members focused on the task at hand.

Moreover, the dynamic dashboard data can deliver to each member of your organization’s board of directors unique reporting to track company financials and health. With complete control over the information, individuals can essentially ask and answer their own questions. Instead of assigning the finance team to compile large, intensive reporting documentation to present at a board meeting, board members can focus on the true reason they are there: to develop the organization’s strategy and direction. In turn, by leveraging cloud computing technologies to collect and visually report company information, the finance department frees up hours to focus on process improvement, streamlining workflows, and executing on big-picture business objectives.

With all of the technologies available in the marketplace today, adopting a cloud ERP system to provide your organization with the means to have a daily financial close streamlines and automates otherwise manual, involved business processes. Ultimately, this results in greater visibility into your business and provides improved tracking and decision-making abilities to optimize your business spend and drive revenues.

Stuart Tholen is the Director of Eide Bailly Technology Consulting’s
Enterprise Resource Planning services. With more than 30 years of
experience in tax, audit and IT, Stuart has focused on building and
developing a consulting department to customize and integrate
business solutions for the end-user.


Will We See You There?


Sage Summit 2014 is coming up July 28th through the 31st in Las Vegas, and the event line-up looks great! Eide Bailly Technology Consulting’s own Eric Anderson will be leading two sessions this year covering “Sage Intelligence: Run Your Business Better” and “Sage Intelligence: Why It Should Be a Part of Every Consultants’ Toolkit.”

Join us on Wednesday, July 30th for our Eide Bailly Technology Consulting client appreciation social and mixer at Mandalay Bay!


Will you be there? For more information on Sage Summit 2014 or to register, go here.

Podcast | What Can Windows 8.1 Do For Your Business

There has been a lot of hype and confusion surrounding the Windows 8 operating system (OS) since its launch in 2012 and the subsequent updates that have followed. As such, our latest podcast covers some of the features and functionality of the new and improved Windows 8.1, as well as its key shortcomings for business users.

As a whole, 8.1 is the OS that Windows 8 wanted to be at its launch. The series of system updates has streamlined compatibility and integrated ease-of-use gestures that greatly improve the overall user experience; however, users still find the new interface intimidating, and a few poorly designed initial features have left a sour taste in the mouths of many businesses despite its strides in the right direction.


With all of the improvements, could Windows 8.1 be right for your business?




Mike Arvidson is the Director of Eide Bailly Technology Consulting’s
Infrastructure Services. With more than 20 years of experience in
the IT industry, Mike’s wealth of knowledge includes network
systems implementation, integrating new technologies, and
information security.


What Distributors Can Learn From Amazon

When it comes to e-commerce, Amazon has been doing everything right. The online B2C retail giant has been making headlines since announcing its plans for “Prime Air,” a 30-minute delivery-by-drone strategy it aims to implement in as soon as four years. But Amazon is also making serious strides in wholesale distribution, setting itself up as a game changer with a dedicated B2B site stocking over 750,000 SKUs in business-related products across more than 14 category types. It goes without saying that current business-to-business distributors will soon be staring down a very serious competitor.

Nearly sixty percent of businesses reportedly purchase products from online retailers today, and over half expect to increase that online spend in the future. Additionally, buyers increasingly demand a unified shopping experience, whether on a consumer platform or in B2B. Business buyers, just like consumers, expect product availability, convenience and ease-of-use, and top-line customer service.

All this said, there is much distributors can learn from Amazon’s successes. The following are five key B2C practices that, when implemented in a B2B market, can deliver the buyer experience that sets you apart in the marketplace.

Real-Time Updates
Business 101: Buyers follow the product. Regardless of whether you’re in B2C or B2B, if an item is unavailable, the buyer will order elsewhere. Product availability, according to a 2013 survey conducted by Acquity Group, is the number one factor buyers use to select a retailer, followed by speed of delivery, breadth of inventory, customer support, and the product price itself. By managing your inventory and providing real-time product availability updates, you are encouraging purchases by notifying customers of limited availability items and eliminating out-of-stock back orders and order cancellation frustrations. Happy customers mean return customers.
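The availability messaging described above can be sketched in a few lines; the threshold and the wording are illustrative, not drawn from any particular e-commerce platform:

```python
# Illustrative real-time availability status for a product listing.
# The low-stock threshold of 5 units is an assumption.
def availability(on_hand, low_stock_at=5):
    """Return the availability message to show a buyer."""
    if on_hand <= 0:
        return "out of stock"
    if on_hand <= low_stock_at:
        return f"only {on_hand} left"  # limited availability drives urgency
    return "in stock"
```

The "only N left" message is the same nudge Amazon uses: it converts a plain inventory count into a reason to buy now rather than elsewhere.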

Self-Service Tools
Tools such as customer support forums, order tracking, and quality product images and information provide buyers with the information they need to purchase a product. Implement self-service options on your e-commerce site so customers can troubleshoot or research products and company policies.

Click-to-Order Purchase History
Purchase history with click-to-order capability saves B2B buyers time, as most businesses tend to order the same products cyclically.

Product Recommendations
Consumer-facing e-commerce retailers are great at promoting their product inventory. Whether it is cross-selling related SKUs or up-selling, B2B retailers can implement this by offering buyers product recommendations, providing customers with alternatives and ultimately increasing your bottom line.

Multi-Faceted Search
Often, business-to-business sites offer limited search capabilities, making it difficult for a buyer to find what they need without a product number or specific product name. Integrated on any successful B2C e-commerce site, multi-faceted search functionality streamlines this process, making it easy for shoppers to filter results by category, attributes, and keywords.

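A multi-faceted search of the kind described above can be sketched as successive filters over a product list; the catalog records and field names below are illustrative:

```python
# Minimal sketch of multi-faceted search: narrow a product list by
# category, attribute values, and a free-text keyword in the name.
def faceted_search(products, category=None, attrs=None, keyword=None):
    """Return products matching every facet the buyer has selected."""
    results = []
    for p in products:
        if category and p["category"] != category:
            continue
        if attrs and any(p.get(k) != v for k, v in attrs.items()):
            continue
        if keyword and keyword.lower() not in p["name"].lower():
            continue
        results.append(p)
    return results

# Hypothetical catalog slice:
catalog = [
    {"name": "Hex Bolt 10mm", "category": "fasteners", "material": "steel"},
    {"name": "Hex Bolt 10mm", "category": "fasteners", "material": "brass"},
    {"name": "Wood Screw 40mm", "category": "fasteners", "material": "steel"},
]
```

A buyer filtering fasteners by material "steel" with the keyword "bolt" lands on exactly one product, without ever needing its SKU.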
As Amazon and other big players, like Google, continue to move into B2B markets, distributors must learn to adapt quickly and use systems that offer these consumer-focused capabilities if they expect to stay competitive in this increasingly modern e-commerce environment. In the end, the B2B retailers that make purchasing the easiest will win the market.


Stay tuned for our upcoming webinar centered on wholesale distribution.

Trina Michels is a business applications manager with Eide
Bailly Technology Consulting. Analytical by nature, Trina
aims to streamline operations that are often overlooked
by integrating and implementing end-to-end solutions for
her clients that support their unique business objectives,
leveraging technology to maximize goals.


Self-Service Visual Data Discovery in Action

Traditionally, business intelligence (BI) has been centrally managed by IT managers who distribute predefined reports and dashboards across various departments and management levels, making it a top-down business model. By comparison, self-service visual data discovery is a bottom-up BI model that empowers end users to find the answers they need when they need them by selecting the data they deem relevant, presenting it with the best visualization method, and interacting with it for insight into smarter business decisions.

Let’s take a look at two examples that demonstrate how users can interact with data to discover better business insight.

Budgeting Blunders

A CEO views an executive KPI dashboard (Figure 1) and sees expenses in the red, indicating that the organization’s spend exceeded the budget.

Figure 1: Executive KPI Dashboard

The CEO then navigates to the expenses dashboard (Figure 2) and selects the current YTD. January and March show expenses exceeding the budget, so the CEO lassoes January and March in the bar graph.

Figure 2: Expense Dashboard

The CEO follows the data (Figure 3) to discover that the new “Corporate Likeability” campaign was not budgeted for. The CEO then selects “Corporate Likeability” to filter the current YTD and discovers that the majority of “Corporate Likeability” expenses occurred in January. The CEO is able to share the dashboards with the PR team across multiple locations.

Figure 3: Expense Dashboard Filtered for January and March

In this example, the CEO was not initially anticipating the need to dive deeper into the expense data, but the self-service capability to work with the system at an individual level allowed the CEO to quickly segment and analyze the necessary data when and how it was needed.
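The drill-down the CEO performed, lassoing months and then filtering by campaign, amounts to interactive filtering and aggregation over the underlying expense records. A minimal sketch of that operation, with hypothetical records and dollar figures (a real tool would run this against its in-memory data model, not a Python list):

```python
# Hypothetical expense records for the current YTD.
expenses = [
    {"month": "Jan", "campaign": "Corporate Likeability", "amount": 80_000},
    {"month": "Feb", "campaign": "Product Launch",        "amount": 30_000},
    {"month": "Mar", "campaign": "Corporate Likeability", "amount": 25_000},
    {"month": "Mar", "campaign": "Product Launch",        "amount": 10_000},
]

# "Lasso" January and March, then filter to the unbudgeted campaign.
selected = [e for e in expenses
            if e["month"] in ("Jan", "Mar")
            and e["campaign"] == "Corporate Likeability"]

# Aggregate by month to see where the overspend concentrates.
by_month = {}
for e in selected:
    by_month[e["month"]] = by_month.get(e["month"], 0) + e["amount"]
# With these sample figures, most of the spend lands in January,
# mirroring the CEO's discovery in the walkthrough.
```

The value of the self-service model is that the end-user composes these filter-and-aggregate steps interactively through the dashboard, with no report request to IT.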

This real-time aggregation of data is made possible by the processing power of multithreading systems, all working together to filter the data and produce immediate visual feedback. Web and mobile technologies allow users to share findings and interact simultaneously across the organization to resolve issues, in this case the budgeting error for the organization’s new “Corporate Likeability” campaign. This is how self-service data discovery ignites interactive business communication.

Premium Play

A manager at an auto insurance company is analyzing loss ratios across Boston districts. Loss ratios less than 100% are considered profitable, whereas loss ratios greater than 100% are not. Using the interactive map (Figure 4), the manager digs into a yellow location that is borderline profitable, with a loss ratio between 90% and 100%.

Figure 4: Boston Loss Ratios Interactive Map

The manager navigates to the “What-If” bar chart (Figure 5) displaying the loss ratios within that location by age groups with the premiums received (red dot) and claims paid (yellow dot) for each respective group.

Figure 5: What-If Bar Chart for Zip Code 02130

The manager uses the sliders to vary the premiums within the entire zip code, focusing on the three age groups above 100%. The manager adjusts claims by 5% for inflation and then varies premiums by 12% to drop the loss ratio to 87.9%.

Figure 6: Adjusted What-If Bar Chart

The manager dives deeper into the 16 to 25 age group, maintaining the inflation increase but varying the premiums by 20% to drop the loss ratio from 139% to 129.9%.

Figure 7: Adjusted What-If Bar Chart for the 16 to 25 Age Group

In this example, the manager utilized interactive, geographically based data to identify where insurance premium adjustments were needed. Color-coded, quantitative displays based on location-specific data were layered over a map, and the interactive technology allowed the manager to click and follow areas of interest. Adjusting the what-if sliders allowed the manager to forecast a desired loss ratio based on claims, premiums, and inflation. This data is not stored, however; the sliders provide input for real-time calculations, and the chart immediately reacts to the changing values with visual displays.
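Underneath the sliders, the what-if interaction reduces to a simple recalculation: loss ratio = claims paid ÷ premiums received, recomputed under the adjusted inputs. A sketch of that calculation follows; the baseline dollar figures are hypothetical, chosen so that the 5% claims and 12% premium adjustments reproduce the 87.9% ratio from the walkthrough:

```python
def what_if_loss_ratio(claims, premiums, claim_adj=0.0, premium_adj=0.0):
    """Loss ratio = claims paid / premiums received, after applying
    the percentage adjustments from the what-if sliders."""
    return (claims * (1 + claim_adj)) / (premiums * (1 + premium_adj))

# Hypothetical baseline: $93,760 in claims against $100,000 in premiums
# is a 93.76% loss ratio -- borderline profitable, like the yellow
# location on the map.
base = what_if_loss_ratio(93_760, 100_000)  # 0.9376

# Raise claims 5% for inflation and premiums 12%, as in the example:
adjusted = what_if_loss_ratio(93_760, 100_000,
                              claim_adj=0.05, premium_adj=0.12)
# adjusted is 0.879, i.e. an 87.9% loss ratio
```

Because the inputs feed a pure calculation rather than stored data, the chart can re-render on every slider movement with no write-back to the source system.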


As you can see with both of these examples, the ability for end-users to discover and follow data down unforeseen paths was key to finding answers and solving problems within the business. Self-service visual data discovery provides innate do-it-yourself capabilities, and interactive visual delivery allows business users to quickly respond to constantly changing business environments.



Brian Grote is the Director of Eide Bailly Technology Consulting’s Business Analytics. Brian delivers a unique skill set, combining years in senior executive management with ERP and CRM systems expertise. His experience includes business intelligence software, accounting principles, systems implementation, and development.


IT Project Planning: Cooking Up Success

Part 2 | Execute On Your Recipe

The actual execution of an IT project can be tricky.

Now that’s an understatement.

While thorough defining and planning can minimize a project’s risk and optimize your odds of a successful deployment (for a refresher on that, read May’s blog post), no amount of preparation can guarantee a smooth and seamless execution. There will always be those unexpected obstacles that pop up when you’re not looking. For projects such as process optimizations, accommodating these unanticipated events may affect the timeline or budget, but they will likely have a minimal impact on the overall project. When it comes to software deployments, however, things get more complicated. There are so many moving parts in IT projects, from the infrastructure set-up to the software build and process design, that one unexpected issue can have a domino effect on the entire project.

An IT project manager is essentially an event planner, the orchestrator behind complex, highly visible events. While the definition and planning phases helped us develop our menu and collect our necessary ingredients, the execution of an IT project is essentially the day of the party. Everything needs to come together in just the right way to make the event a success. In IT projects, this would be considered the build phase.

Cook Your Meal

The execution/build phase is generally when project managers start holding their breath. This is where all your hard work in the planning process is tested. This is also the phase where unanticipated issues have the highest level of risk. Once the meal prep begins, a missing ingredient or an overlooked cooking requirement could throw the whole timeline or menu out of whack. There are, however, some basic approaches that can keep your project on track and help you handle the unexpected with ease.

  • Communicate: Set up a series of status meetings, no less frequent than weekly, during the build phase. Daily scrums or stand-ups are highly effective communication techniques that provide high visibility into the project’s progression and allow timely identification of potential issues.
  • Manage the Plan: Track progress and timelines to ensure key deliverables are accomplished on or before the established target dates.
  • Don’t Wait Until It’s Late: Be proactive in your follow-ups; don’t wait until a deadline to inquire about a task’s status. The more proactive your follow-ups, the sooner deliverables will be completed.
  • Benchmark: Borrowing a common sports strategy, create short-term goals and measure them against your “competition” to drive a successful long-term outcome. Benchmarking helps define your project’s critical path, allowing you to identify risks to the timeline earlier.
  • Quality Assurance: Don’t wait until a module is fully built before testing begins. Maximize your resources and productivity by performing quality checks and testing incrementally throughout the build phase.

If Nothing Else, Communicate

The key to successful project execution is communication, and the most important role of a project manager is to facilitate that communication. Capturing the necessary information and disseminating it to the appropriate audience should ideally comprise roughly 90% of a project manager’s effort. A breakdown in communication at any level of the project, be it with the project sponsor or a team member, can have disastrous effects on a project. All stakeholders should be informed of the overall progress of the project from a benchmarking angle, and all project team members need to be aware of the project’s status at the task/objective level. There is no such thing as too much communication, as long as it is clear, consistent, and accessible to everyone who needs it. When everyone on your project team knows what they need to know, project success is more easily achieved.

Sabrina Schindler is a consultant with Eide Bailly Technology Consulting. She is a certified Project Management Professional (PMP) and has more than 7 years of experience managing software application implementation and optimization projects covering scope, timelines, and