Guest Blogger | Bash “Shellshock” Vulnerability: What You Need to Know Now

The media, the IT security industry, and social platforms are all talking about a newly discovered, highly critical vulnerability named “Shellshock.” With all the hype surrounding these types of security threats, claims that this vulnerability “affects 50% of the internet,” impacts “hundreds of millions of computers, servers and devices,” and is “bigger than April’s Heartbleed vulnerability” are being tossed around left and right, and frankly, it’s overwhelming. In an effort to not only promote awareness of this vulnerability but also provide valuable, actionable information on the topic, we’ve brought in security expert and information technology professor Michael Ham as a guest blogger to help us truly understand what Shellshock is, which devices are impacted, and how to effectively mitigate the risk.


What is Bash?

To understand what Shellshock is, you first have to understand where the vulnerability is rooted. Shellshock impacts a component of many Linux and Mac OS X operating systems known as the “Bash shell.” For more than 25 years, the Bash shell has provided system users and administrators with a command-line interface (CLI) for interacting with their Unix-based systems; from a Windows operating system perspective, it is roughly the equivalent of the command prompt. The Bash CLI allows end users to create and modify user accounts and permissions, manipulate data, and interact with the operating system in highly privileged ways.
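For readers less familiar with a CLI, here are a few everyday Bash commands of the kind described above (the account name, file path, and search term are purely illustrative):

sudo useradd -m alice            # create a new user account (requires administrative privileges)
sudo chmod 640 /var/log/app.log  # modify file permissions
grep "ERROR" /var/log/app.log    # inspect and filter data in a log file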


Shellshock Vulnerability: The Basics

The Shellshock vulnerability was first publicly reported on September 24 and is rapidly catching attention. The threat ultimately stems from improper handling of a form of global data within Unix operating systems known as environment variables; these variables are updated in a number of ways and affect how a device’s processes behave. Services that interact with environment variables on Unix systems include the Apache web server, OpenSSH, and DHCP clients, which extends the vulnerability to web servers and routers.
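One Bash feature in particular sits at the root of the bug: Bash passes function definitions from a parent shell to its children through environment variables, and vulnerable versions keep executing whatever text trails the imported function body. A minimal, benign illustration of the feature itself (the function name is arbitrary):

greet() { echo "hello from an exported function"; }
export -f greet
bash -c greet    # the child shell imports the definition from its environment and runs it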

Essentially, due to this flaw in how the Bash shell processes environment variables, attackers are able to remotely access a vulnerable system over the network and execute arbitrary commands. If this sounds familiar, it is because Heartbleed was also remotely exploitable; but where Heartbleed leaked sensitive data from memory, Shellshock allows an attacker full, outright control of a targeted system. Additionally, security experts across the industry have rated Shellshock as high severity and low difficulty, meaning that if an attack happens, it’s going to be bad, and it is relatively simple for cyber assailants to launch.
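To make the remote vector concrete: web servers that hand requests to Bash-based CGI scripts copy client-supplied headers into environment variables such as HTTP_USER_AGENT, and a vulnerable Bash will parse a crafted header as a function definition and then execute the commands that follow it. A deliberately simplified illustration against a hypothetical target (never run this against systems you do not own):

curl -H "User-Agent: () { :; }; /bin/cat /etc/passwd" http://vulnerable.example.com/cgi-bin/status.sh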


Who’s Affected?

Vulnerability disclosures report that the Bash shell is vulnerable from version 1.13 through version 4.3, nearly 22 years’ worth of releases. As you can imagine, given the vast expanse of time the vulnerability has persisted, a large number of devices are likely exposed to a potential attack.

The majority of devices in the spotlight fall under one of the following categories:

  • Apple Mac OS X
  • Many distributions of Linux (Debian and Ubuntu use the Dash shell as the default system shell, which narrows some attack vectors, but Bash is typically still installed and should be patched)
  • Embedded Unix devices, such as wireless routers

It is important to note that Windows users are not directly affected by Shellshock; however, a program running on a Windows machine could itself run Bash and thus potentially be open to attack, the two most notable examples being Git and Cygwin. Check for updates within your programs periodically.

Given the severity of this issue, many users are looking for a simple and sure-fire way of determining whether their systems are affected. If you are running a Mac OS X or Linux system where you can log in and access the terminal, you may run the following command:

env x='() { :;}; echo Vulnerable' bash -c "echo Check for updates"

If your terminal displays “Vulnerable” followed by “Check for updates,” that machine is vulnerable to Shellshock.

If you receive an error message similar to this, the device is not affected:

bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'


What’s the Impact?

The National Vulnerability Database, a service sponsored by the Department of Homeland Security, provides vulnerability scores based on exploitability and potential impact. Shellshock has earned a score of 10 in both categories, the highest level of severity in the rating system.

An attacker may be able to leverage this vulnerability remotely and without any credentials (unauthenticated). In the event of a successful exploitation, an attacker can create denial-of-service (DoS) conditions on critical services, extract sensitive data, take complete control of affected systems, and redirect network traffic to untrusted or illegitimate locations, resulting in further compromise. The combination of easy exploitation and highly compromising results is a perfect storm that adversaries will look to capitalize on if users do not mitigate their risk.


What to Do?

While understanding the root of the vulnerability and its potential impacts is important, many people are simply looking for a way to mitigate the associated risks. Unfortunately, because we are still early in the discovery phase of this threat, few strategies exist to easily patch the vulnerability. As a general rule of thumb, ensure that you are running the latest versions of the software and programs on your devices to best mitigate your risk. Enabling automatic updates will relieve you of this manual process, keeping you protected with the latest updates against these types of threats.

The following is a brief roundup of what can be done at this point on any affected devices:

  • Apple Mac OS X – Unfortunately, there is not yet an official patch released by Apple to address the vulnerability. Check your system updates frequently as Apple is expected to address the situation and respond accordingly in the days to come.
    • Sources have issued guides on manually updating and patching the shell in OS X. Be extremely cautious doing so, as this may lead to undesired results.
  • Linux – Patches are out for some of the well-known distributions, such as Red Hat, and most appear to be released through the normal system update process (see the example commands after this list).
  • Embedded Devices – Given the nature of these devices and of the vulnerability itself, it is going to be difficult to determine which devices are vulnerable and which are not. Check with your devices’ manufacturers for any indication of a vulnerability; manufacturers will face mounting pressure to respond with appropriate patches as security researchers unveil more information about affected devices.
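For Linux systems managed through a package manager, applying the patch is typically a one-line operation once your distribution has published it; the exact command depends on the distribution (both examples assume administrative privileges):

sudo yum update bash                                              # Red Hat, CentOS, Fedora
sudo apt-get update && sudo apt-get install --only-upgrade bash   # Debian, Ubuntu

After updating, rerun the test command shown earlier in this post to confirm the shell is no longer vulnerable.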

 

We will bring you more information as it is made known, but in the meantime, if you’re running Linux or OS X, install the newest security updates as they become available to keep yourself protected.

You can read more about this vulnerability from the following trusted sites:

http://mashable.com/2014/09/26/what-is-shellshock/
http://gizmodo.com/why-the-shellshock-bash-bug-could-be-even-worse-than-he-1639047786

 


Michael Ham is a professor in the College of Business and Information Systems at Dakota State University in Madison, South Dakota, where he specializes in information security, cyber operations, and system administration. Michael is a contracted, independent tester with Eide Bailly Technology Consulting, where he performs internal vulnerability evaluations, external penetration testing, and social engineering assessments.

i-mham@eidebailly.com

 

Safeguarding Trust with IT Security

We’ve all heard about the major data breaches companies both large and small have experienced as of late; from Target to Home Depot, the prevalence and size of breaches are growing. The risk of cybercrime is real and present in today’s technology-driven world. In fact, according to a new report from the Ponemon Institute, 43% of businesses experienced a data breach in the last year, up 10 percentage points from the previous year. And while you may assume the compromised data is your biggest concern in the event of a security breach, the true threat is to your organization’s reputation and your ability to maintain the trust of your clients and stakeholders; although, with an average cost of $201 per stolen record in the United States, there is a substantial financial risk to your business as well.

Trust is the currency of consumers today, and when that trust is broken, it can be extremely challenging to regain. A quality product, service or experience is only one facet of consumer trust; client satisfaction is based on their entire experience with your organization at every touch point, including how your organization safeguards that trust with an exceptional IT security strategy. It is shocking to find that 27% of businesses today do not have an established security strategy despite the steady rise in threats.

Security and privacy begin in the boardroom; they cascade over the C-suite and trickle down through the organization, ultimately resting on the shoulders of every single employee within your business. As you evaluate your current IT security strategy, the following are important, and often overlooked, aspects to consider.

Security Begins and Ends with Leadership  |  It is critical that your organization’s leadership determine the level of risk you will assume; the technology department should never lead security and privacy efforts.

Be Intentional  |  Many organizations simply put IT security tools in place, then stand back and wait for something bad to happen. Be intentional and proactive: constantly monitor the effectiveness of your security tools so that you can continually improve processes and procedures while staying ahead of risks with the latest tools and technology.

Make it a Regular Discussion  |  Security needs to be a regular aspect of every board strategy and risk assessment meeting. Board members need to be educated on what the risks are and what is being done to mitigate them.

Put the Right Tools and Policies in Place – and Monitor Effectiveness  |  Security measures may work properly in theory but fail if they are not used correctly or are altered. You must consider the human factor involved in safeguarding electronic information, and as an organization, it is important to remember that the right tools and policies are only as effective as the individuals who monitor them.

Solve for Mobile  |  Mobile devices are an integral facet of everyday life for clients and employees alike. They are also an emerging technology platform for hackers, and they pose a significant security risk within your organization. It is important to find a BYOD solution that functions correctly within your space and allows your staff and consumers to interact efficiently. Don’t be afraid of the security aspects of mobile technology; rather, manage those risks appropriately and frequently.

Train, Train, Train  |  Every employee needs to understand the risks and their role in safeguarding the trust of your clients. Regular training on policies, procedures and the human behavioral element of security is imperative, particularly during this period of rapidly evolving technology in the marketplace.

Test  |  Regular internal and external security testing of your tools, policies and people is truly the most effective method of assessing hidden areas of high risk within your organization. Whether you conduct these tests internally or hire a white-hat hacker to provide an additional perspective is determined by the ability and bandwidth available within your organization.

IT security is about safeguarding trust and deterring breaches. Organizations that take an intentional, proactive stance have the opportunity to build trust and lead their industry in setting the standard for exceptional security. Ultimately, high levels of trust result in improved stakeholder satisfaction, stronger employee retention and reduced organizational risk, all of which are essential in today’s evolving business landscape.


Mike Arvidson is the Director of Eide Bailly Technology Consulting’s Infrastructure Services. With more than 20 years of experience in the IT industry, Mike’s wealth of knowledge includes network systems implementation, the integration of new technologies, and information security.

marvidson@eidebailly.com

 

Business Analytics & Big Data: Your Golden Opportunity for Success

Big data. You hear the term all the time – in meetings, within your business network, at roundtable discussions and industry conferences – but do you really understand the concept?

Putting it simply (or, rather, as simply as possible), big data refers to the combination of traditional information (financial records, transaction details, point-of-sale interactions) and digital information (metadata, web behavior, social exchanges), collected both internally and externally, that feeds ongoing analytic discovery for your organization. It is the premium synthetic motor oil to the engine of your business analytics system; without it, you’ll lock up, running dry on flat, lifeless information.

Generating value from all these sources, however, requires powerful processing and discovery capabilities. The market today is responding with more analytics tools and functionality than ever before, empowering users to turn all these business facts and figures into genuine business insights. It’s a new approach, one that has created sudden demand for previously unheard-of roles like data scientists and BI analysts, but even as the technology and the industry progress, business users by and large aren’t capitalizing on their golden opportunity.

The statistics on leveraging the power of big data and business analytics to make better business decisions are staggering.

[Infographic: Eide Bailly “Gold Rush” big data statistics]

Whether it’s QlikView’s dynamic, real-time visual data discovery or IBM’s newly announced “freemium” cloud-based application Watson Analytics, with its powerful what-if predictive insights, big data is moving into the mainstream with more innovative, easy-to-use tools. It is in your organization’s best interest to “mine for business gold” with big data analytics and gain a competitive advantage before the rest of your market cashes in on the gold rush.


Darwin Braunagel is a technology business advisor with Eide Bailly Technology Consulting. He has more than 15 years of experience developing and managing business and technology strategies, with success in the selection and implementation of ERP, CRM, cloud, and mobile solutions.

dbraunagel@eidebailly.com

 

The ERP Conundrum: Vanilla, Customized or Configured

When implementing a new ERP system or simply upgrading your current one, questions arise as to which approach best meets your organization’s business requirements. While solutions and factors vary greatly from one scenario to the next, we examined common themes in the industry and identified three high-level strategies for ERP system implementations: vanilla, customized, and configured.

Defining

So what are these methods, and how do they differ? When I say “vanilla,” I’m simply referring to an out-of-the-box implementation style with little to no modification; it is the vanilla ice cream of systems, with no hot fudge or fancy sprinkles. It’s simple, and often there’s a whole other side to your sundae that you’re missing out on (but more on that later). Customized, on the other hand, refers to modifying a system’s source code to such a degree that any system upgrade requires additional programming and resources from your organization; think of building an elaborate addition to your home and attempting to fix the cracked foundation afterward. And lastly, configured is an implementation style that leverages a system with inherent, built-in flexibility.

The Three Bears

I am sure everyone is familiar with the classic children’s story about an intrusive young girl with hair of gold who trespasses in the home of three bears and proceeds to eat their dinner, break their chair, and sleep in their beds. Despite her obvious lack of social boundaries, I urge everyone to harness their inner Goldilocks and find the ERP system and implementation strategy that is “just right” for your organization.

Vanilla | The Porridge is Too Cold

Vanilla implementation may be a viable option for small organizations with annual revenue of no more than $5M that require only basic accounting functionality; for many organizations, however, vanilla ERP implementation requires reworking business processes. An organization may choose a vanilla implementation in an attempt to ensure project success and avoid costly, complex upgrades, but this easy implementation is not a strategic solution, and the trade-offs result in inadequate functionality for many mid-market companies. Moreover, reengineering business processes requires change and adaptation by users, which may or may not be forthcoming.

Customized | The Porridge is Too Hot

Previous generations of ERP software were often accompanied by heavy customization, and as a result, the complexity and risk of implementations, enhancements, and upgrades increased. Many companies have opted simply not to upgrade their existing, customized ERP systems because doing so would require a substantial financial and resource commitment. In an attempt to sidestep these obstacles to successful ERP implementation and management, the current trend is to avoid customization, which over the long term means falling short on functionality.

Keep in mind that for newer generation ERP systems the term “customization” is sometimes used as a catch-all for describing configuration and/or integration capabilities, which we’ll discuss next.

Configured | The Porridge is Just Right

Newer generation ERP systems have addressed the shortcomings of vanilla ERP implementation as well as the headaches of customizing previous generation ERP systems. Advancements targeting these issues have led to powerful configuration and integration capabilities.

Today’s modern ERP solutions deliver configuration functionality with chameleon-like adaptability, creating industry-specific ERP capabilities that address everything from business products and processes to compliance regulations. ERP vertical offerings are pre-configured ERP systems that deliver functionality tailored to specific industries based on proven market best practices. Powerful configuration capabilities also extend beyond ERP verticals to deliver strategic implementations with unique functionality that becomes an organization’s true competitive advantage (for more information on that, read our latest techbITes newsletter). Also, as a general rule of thumb, the system’s source code is not altered, allowing the organization to remain on the most current version of the software.

Integration is closely related to the configured style of implementation and is another key feature in finding a system that is just right. Most organizations require their ERP system to integrate (share processes and data) with other business applications, both internal and external. Integrating with a separate, custom application may be just the solution an organization needs to meet the unique demands of a mission-critical business process, and application programming interfaces (APIs), the software gateways permitting integration with third-party applications, can be the means to this end. An ERP system with an open, API-centric architecture adds functionality without changing the system’s source code and is therefore able to sustain its integration capabilities through system upgrades.
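To make this concrete, here is a minimal, hypothetical sketch of an API integration call: a custom application posting a sales order to an ERP system’s REST endpoint. The URL, field names, and authentication scheme are illustrative and do not represent any particular vendor’s API:

curl -X POST https://erp.example.com/api/v1/sales-orders \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"customerId": "C-1042", "items": [{"sku": "SKU-77", "quantity": 3}]}'

Because the call goes through the published API rather than the system’s source code, the same integration continues to work as the underlying ERP is upgraded.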

Experience in the Kitchen

Strategies and solutions aside, when in doubt, find a cook with experience in the kitchen to help your organization cook up an ERP porridge that is just right for you. Software is only one part of the success equation; choosing the right ERP partner is equally important. An experienced partner knows the ERP systems available in the market inside and out and has the expertise to optimize all of the functionality a particular system has to offer. Moreover, industry-specific experience is a culinary coup to look for in your search for an ERP cook, ensuring that your business’s unique requirements are considered and that your ERP system is implemented and configured to deliver value both now and in the future.


Stuart Tholen is the Director of Eide Bailly Technology Consulting’s Enterprise Resource Planning services. With more than 30 years of experience in tax, audit, and IT, Stuart has focused on building and developing a consulting department that customizes and integrates business solutions for the end user.

stholen@eidebailly.com

 

Creating Company Culture through Technology

Keeping in mind last week’s Labor in the Technology Industry post and its statistical proof of the blossoming potential in our field of work, organizations may be wondering how they can capitalize on the market’s strength. Having a successful business, and maintaining that success, is a direct result of company culture, and with a great company culture comes a high level of employee retention. Ask yourself: “How do we develop the type of company culture that will attract, engage and retain the top industry talent?”

Building and enhancing culture is vital to every business, big or small. It is not a “one size fits all” situation. Depending on how you want to cultivate your company culture, certain technology tools can be utilized, such as:

  • Corporate Intranets provide tools and workspaces for employees to get work done, along with a place to communicate company achievements and ideology.
  • Enterprise Social Networks allow employees to connect, respond, and collaborate through social structures that connect people and topics.
  • Online Brand Communities connect employees with partners, customers and would-be customers to learn about products, exchange information, and resolve issues.
  • Unified Communication Platforms leverage presence information and integrate communication tools to deliver a common user experience across devices and applications.

These tools facilitate a culture that motivates employees to engage with their coworkers on a daily basis and enjoy it. Most importantly, these technology tools help to foster a culture that results in collaboration, recognition, and trust. These three aspects serve to hold a business together and create a strong company culture.

Collaboration | In order to promote collaboration, employees need to understand that a successful business functions as a team. Every employee must work with their coworkers, in one way or another, to get things done. Technology tools such as a company-wide intranet and CRM systems like Salesforce provide environments for coworkers to share files, hold discussions, provide feedback, and exchange updates.

Recognition | When employees collaborate, it builds camaraderie between colleagues and within the company as a whole. This bond creates a culture of respect between employees and, therefore, greater recognition. With tools such as Yammer and SharePoint, employees are not only more aware of what their coworkers are doing but also are more likely to recognize and support their accomplishments. These technologies drive visibility and solidify connections that generate recognition.

Trust | With collaboration and recognition in place, an environment forms that upholds professional and mutual respect between employees. When information is shared freely and openly on these technology platforms, employees come to expect a certain standard of trust between one another and with the organization.

If technology is the solution to connecting these three key culture aspects, then the ability to leverage these tools will determine whether or not a business can create and foster a great company culture and ultimately become “the place to work.”


Sandi Piatz is the Director of Business Development with Eide Bailly Technology Consulting. With more than 16 years’ experience in the technology industry, Sandi specializes in the recruitment, management, and development of relationships, with a focus on understanding organizations’ business objectives and aligning their technology initiatives.

spiatz@eidebailly.com

 

Labor in the Technology Industry

As the long holiday weekend approaches, we thought we would take a moment to reflect on the significance of Labor Day and the impact of the information technology industry.

First observed in New York City in 1882 and signed into federal law 120 years ago, Labor Day celebrates the social and economic achievements of American workers and their contributions to the overall strength and prosperity of our country.

The technology industry has made a substantial impact on the economic health of the United States and the world as a whole. Even during the 2009 recession, the industry lost only 1% of its workforce and proceeded to rebound even healthier in 2010 than in the years prior, according to the U.S. Bureau of Labor Statistics. Between 2001 and 2011, technology-related employment increased by 18% despite the dot-com crash of 2000, when investors sold off large amounts of overpriced stock. In fact, the IT industry has grown by a staggering 37% since 2003, and industry output has expanded rapidly, increasing 4.6% annually on average, driven by the widespread implementation of complex corporate networks and computer systems.

Looking forward, projected growth for the IT industry’s output and employment is expected to exceed those of other industries, thanks in large part to new technologies like mobile platforms and cloud computing. It is an industry that is making strides in the right direction, both in technological advances that improve our day-to-day lives and in the creation of new and innovative careers.

[Infographic: IT labor statistics]

 


Sandi Piatz is the Director of Business Development with Eide Bailly Technology Consulting. With more than 16 years’ experience in the technology industry, Sandi specializes in the recruitment, management, and development of relationships, with a focus on understanding organizations’ business objectives and aligning their technology initiatives.

spiatz@eidebailly.com

 

Is It Time for IT to Get Lean?

Lean Six Sigma is a managerial concept that focuses on the elimination of waste, the reduction of defects, and the promotion of continuous improvement. The methodology is generally associated with manufacturing, but its concepts can be applied effectively to almost any industry. After all, isn’t there a need for industries across the board to become more cost efficient and increase quality? Yet technology companies and IT departments seem to be forgotten when Lean and Six Sigma principles are adopted; even in companies that have successfully deployed the concepts in other areas of the organization, the application support and IT teams were left off the invitation. Why is IT so often overlooked when it comes to deploying highly effective project management practices?

Lean Six Sigma principles can be directly applied to any IT project, from a software deployment to an infrastructure build; the methodology forces project teams to consider all possible solutions before jumping into an implementation. Significant time is dedicated to defining objectives and measurable goals before a solution is selected, ensuring that the solution is directly in line with the organization’s strategic plan, which is exactly where most IT projects fall short. Unfortunately, it is all too common for organizations to skip straight to a solution without spending the necessary time to confirm it is the optimal one for their business, whether because of an over-promising salesperson, familiarity with the product, or a general “it works for them” approach. Every business is unique, and defining requirements should be the foundation of any project implementation. Often, we see IT teams with significant bandwidth issues, which only perpetuates the “quick fix” mindset rather than designating the time to find the right solution. The more time spent defining business requirements up front, the less time your organization’s IT team will need to spend customizing and working out the kinks in the system later.

While Lean Six Sigma is generally a top-down managerial culture that takes years to deploy successfully within an organization, some of its basic concepts can be applied effectively to any project. The standardized project-phase approach itself is a significant step up the project management maturity model. Using a phased approach of Define, Measure, Analyze, Improve/Implement, and Control, also known as the DMAIC process, keeps a project focused on the best possible scenario for the business. This process also facilitates communication and creativity among team members and enables an invaluable sharing of knowledge that must be considered throughout the project deployment.

The DMAIC process is as follows:

  • Define | Define the overall project details such as the measurable objectives, scope, restraints, timeline, and budget. Make sure to involve all stakeholders in these discussions and create a formal document for all to approve.
  • Measure | Identify data that will be used to measure the success of the project, and create baselines for your objectives. This may be the hardest phase of an IT project, and it is also most commonly overlooked. However, this phase defines what “done” looks like for a project, whether that is the reduction of lag time or full migration to a new platform. Thoroughly defining your final result will help keep the end in mind and minimize scope creep through the creation of measurable goals.
  • Analyze | Root cause analysis is the primary purpose of this phase: dig into the weeds of the problem and identify its source. Effective solutions are based on the underlying issues, not the symptoms. Identify what the ideal future state would look like and what it would take to get there.
  • Improve and/or Implement | Here, identified solutions are tested for feasibility, often through a proof of concept (POC). Because the final solution is selected and implemented during this phase, it is very common for this portion of the overall project to become a project in and of itself.
  • Control | Lastly, the implemented solution is monitored using the measurements defined in Phase II to verify and confirm the expected results. If the desired results are not being seen, adjustments are made. Ultimately, view this as the stability phase: make small adjustments until everything is in balance. Once the expected results have been demonstrated and are stable, the solution is handed off to the project and/or business owners.

The DMAIC process requires considerable discipline to keep the project team from jumping straight to the end. Pressure from both upper management and stakeholders often makes it difficult to resist implementing the first solution that arises. However, staying on track with this methodical management style will help ensure that your solution is the most ideal for your business and that it is implemented correctly, with verifiable results.


Sabrina Schindler is a consultant with Eide Bailly Technology Consulting. She is a certified Project Management Professional (PMP) and has more than 7 years of experience managing software application implementation and optimization projects covering scope, timelines, and resources.

sschindler@eidebailly.com

 

7 Tips for Disaster Recovery

As organizations become increasingly dependent on their IT systems, preparedness for a potential disaster has become a critical component of risk management. A disaster recovery (DR) plan is designed to provide continuity of business services in the event of a disruption, and the effectiveness of such a plan depends heavily on the proper provisioning and preparation of a company’s IT department.

As such, we have identified seven areas to consider when developing a well-rounded disaster recovery solution:

  1. Backups alone are not a disaster recovery plan. Securing backups at an off-site location is only the first step in a DR plan (see the sketch after this list). A true disaster recovery solution involves a recovery environment that will operate in lieu of your company’s production environment if needed. Problems, mistakes, and errors are all par for the course when building and testing a recovery environment; prepare before a disruptive event occurs so that recovery runs smoothly at the time of a disaster.
  2. Prioritize and monetarily quantify your business processes in terms of loss of revenue, productivity, and reputation due to downtime from a disruptive event. For a DR plan to provide business value, the cost should be proportionate to the losses your business would incur. Perform a business impact analysis and develop risk mitigation strategies that match your business needs, financial constraints, technological capabilities, and any industry regulations.
  3. Engage relevant technologies, such as virtualization and cloud-based DR. Virtualization separates an operating system from the physical machine, and it can be a great tool in disaster recovery plans because it eliminates the need to match DR hardware to production hardware. Alternatively, cloud-based technology can be utilized in DR strategies through: 1) production and DR services in the cloud; 2) on-premise point-in-time backup to the cloud, with restore either on premise or to the cloud; and 3) replication to cloud virtualization.
  4. Explore co-location data center options. Co-location involves a shared location that provides businesses with facility logistics such as space, power, security, and connectivity to network and telecommunication services. Businesses provide their own hardware and software in these scenarios, which allows for more flexibility than managed hosted DR services but also requires greater management and maintenance from your company’s IT operations. When exploring co-location data centers, choose a facility with a high-speed network and a redundant backbone.
  5. Leverage a branch office for disaster recovery. Alternatively, geographically dispersed businesses can use a branch office, rather than a co-location data center, to provide these facility logistics; however, a branch IT infrastructure network needs to be implemented before an office can be leveraged as a recovery site. Virtualize servers, disk-based storage, and applications so they are platform and location independent. Wide area network (WAN) performance is of great importance in these environments; therefore, use WAN optimization techniques to increase data-transfer efficiency across locations.
  6. Test your disaster recovery plan. Developing a disaster recovery plan should always be approached with successful testing as the outcome. Testing requires documented procedures and checklists to execute and verify your IT recovery process, and it follows a general sequence of recovering infrastructure, applications, and business processes in a recovery environment. The recovery environment needs to be a separate network, which is why testing can be challenging and is often overlooked.
  7. Retest your disaster recovery plan at least annually. Retesting and continuous improvement go hand in hand with DR plan maintenance, ensuring that your company is matching technologies with business needs and implementing the best testing strategies. Retesting allows you to fold significant changes to business processes or infrastructure into new testing procedures, and it is an opportune time to review advancements in server and storage technologies for disaster recovery. Also, pricing for DR technologies tends to come down over time, making previously cost-prohibitive options more viable for your organization.
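As a sketch of the off-site backup step from point 1 (the host and paths are hypothetical), a scheduled one-line transfer is often the starting point; everything beyond it, including the recovery environment and its testing, is what turns backups into a true DR plan:

rsync -az --delete /srv/data/ backup@offsite.example.com:/backups/data/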

Given the tremendous cost of downtime and the business impact resulting from a disaster, it is poor practice to ignore the business need for a well-developed disaster recovery plan. The development and maintenance of a DR plan is complex and requires resources, but with thorough planning, testing, and continuous improvement, companies of any size and revenue can successfully address and meet their disaster recovery needs.


Kevin Bingeman is a platform support manager with Eide Bailly Technology Consulting. With over 20 years of industry experience, Kevin’s expertise encompasses the planning, budgeting, design, implementation, and management of new technologies to support business operations and organizational goals.

kbingeman@eidebailly.com

 

Scalable Cloud ERP in the Oil & Gas Industry

Finding a system that fits your business’s current needs and will allow for future growth can sometimes feel like an urban business myth; the concept of both-now-and-later is itself an oxymoron, the healthy fried food of software selection. Though it may seem like a tall tale, such solutions are available thanks to the growing selection of cloud and SaaS (Software as a Service) applications on the market today. When leveraged correctly, they adapt, expanding and contracting as needed to follow the elasticity of the marketplace, ultimately driving your business forward through increased efficiency and on-demand resources. Yet many organizations struggle to see the potential in these options. The cloud is nothing new, and nearly all facets of life are moving toward mobile platforms, but even so, when faced with the opportunity to virtualize, businesses often fail to see the possibilities.

This system is too integral to our business to be cloud-based.
We know what we know, and we don’t want to change and re-train.
What big of a difference can the cloud make anyway?

For one wholesale distributor in the oil industry, it was a game changer.

In just ten years, the business went from 45 staff and a 12-license cloud ERP implementation to more than 100 licenses and 165 full-time employees. More than tripling its sales, the company expanded across six states and has seen 400% growth, all while utilizing the same cloud ERP system, in this instance NetSuite.

The key? Customization and scalability.

While the company was busy crossing state lines and breaking into new territories, they were able to manage their increasingly dispersed company as a single organization, handling resources as a single inventory. The system’s flexibility allowed the wholesale distributor to bypass common infrastructure needs in new markets and simply use a mobile, online portal to access all their key data, from field images to project specs. Their growing new workforce used the system as a training tool, connecting with more senior technicians from afar to review transactions and customize project estimates, improving the business’s turnaround and bottom line. By creatively utilizing a dynamic cloud system, they were able to leverage their expertise across the entire organization, regardless of location. With their astronomical success and growth, they are now looking at further customization and scalability opportunities through NetSuite, developing an asset management capability to expand beyond transactional data and give clients system access to pay bills, view project statuses, and manage requirements.

This single instance is a prime example of the untapped potential in cloud and SaaS systems. A single ERP solution was successfully implemented and integrated over a decade’s time, in a rapidly growing industry, and they aren’t done yet.

Imagine what this technology could do for your business.


D.C. Lucas is Eide Bailly Technology Consulting’s Business Development Manager. With almost 20 years of experience, D.C. helps organizations across multiple industries analyze, develop, and maintain their current and future business decisions as they relate to technology.

dlucas@eidebailly.com