New Articles

MRO

5 Promising Ways to Reduce the Impact of MRO on the Supply Chain

Supply chain managers and procurement specialists often must reduce the effects of maintenance, repair and operations (MRO) expenditures on the supply chain. That’s not always easy, but these five tips should spark meaningful and measurable progress.

1. Understand the Impacts of Poor MRO Management

MRO encompasses essential items that are not part of the finished products — sometimes referred to as indirect costs. For example, the category might include lubricant for a machine, safety goggles for workers and scheduled maintenance appointments for equipment.

MRO expenditures typically account for 5% to 10% of the cost of goods sold. Because that range can seem small, some companies manage MRO procurement poorly or not at all. That's a mistake: running out of critical items or falling behind on maintenance can have serious knock-on effects.

For example, if a production line machine runs out of an essential chemical, its output could completely stop until someone re-supplies. Alternatively, running out of safety gear could put lives at risk and expose a company to scrutiny from regulators if accidents happen. Weighing the consequences of inadequate MRO management should provide the encouragement any company needs to take it more seriously.

2. Determine How to Mitigate Climate Change-Related Effects

Many leaders across all industries are paying more attention to how climate change could affect MRO expenditures. For example, some scientists believe climate change makes hurricanes more severe, causing more rainfall than past storms did. In that case, maintaining a building may involve purchasing and installing flood barriers or changing a warehouse layout, so the most valuable items stay out of the reach of rising water.

Imagine an area starts experiencing more severe winter storms. In that case, a company’s MRO budget may include more salt and other de-icing products to keep loading bays and other regularly used areas safe and accessible. Alternatively, business leaders may need to invest in cloud software that lets some people work from home if they can’t reach their workplaces due to icy roads. When companies take preventive measures like these, their overall weather-linked MRO costs should decrease due to better preparedness.

Inclement weather’s effects on the supply chain are not merely hypothetical. A report showed that the 2011 floods in Thailand affected more than 14,500 entities that used Thai suppliers, resulting in billions of dollars' worth of losses for companies whose operations were disrupted. Inclement weather can also raise operational expenses when a business ramps up production to meet the needs of clients whose usual suppliers, located in hard-hit areas, have stopped production.

3. Create an Effective Preventive Maintenance Program

When maintaining the equipment that helps the supply chain run smoothly, there are two primary approaches to pursue: reactive and preventive. Reactive maintenance centers on addressing problems once they appear. Preventive maintenance, by contrast, has technicians assess machines often enough to catch minor issues before they cause significant outages or require total machine replacement.

One survey showed that 80% of maintenance personnel preferred preventive maintenance. The respondents found it especially valuable as part of a multidimensional maintenance plan. Such an approach lets companies avoid the costliest or most time-consuming repairs. That’s because technicians notice most issues while the abnormalities are still small and simple to address.

Business leaders may not immediately associate some MRO expenditures with preventive maintenance. For example, one professional who accepted a position as the maintenance manager of a fully automated warehouse soon found that cleanliness supports preventive maintenance by making equipment problems easier to see. He gave the example of how it's much harder to spot a machine leak when the floor below the equipment is dirty.

4. Set Relevant Key Performance Indicators

Many company leaders — especially those who recognize data’s value — set key performance indicators (KPIs) to track whether improvements on particular metrics occur over time. If they do, that generally means the business is moving toward its goals. On the other hand, if KPIs get worse or stay static despite employees’ best efforts, it’s time to assess what’s going wrong and make the necessary alterations.

Specific KPIs are exceptionally valuable for decreasing MRO's impact on the supply chain. For example, keeping the percentage of slow-moving inventory under 10% is a commonly suggested target. Achieving that aim shows company leaders are avoiding a common mistake: buying a product under the MRO expenditure umbrella only to see it expire before all, or even most, of it gets used.

Inventory accuracy is another worthwhile KPI to track, with 95% or above as the ideal. Incorrect MRO product counts could prove disastrous, particularly when purchasing representatives buy PPE to keep supply chain workers safe. Imagine a scenario where a computer system says a company has 1,000 masks when, due to human error, only 10 are in stock. It's an extreme example, but it illustrates the importance of staying on top of inventory counts.
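As a sketch of how these two KPIs might be computed, the snippet below uses hypothetical inventory numbers; the 10% and 95% thresholds are the targets mentioned above.

```python
def slow_moving_pct(slow_moving_skus: int, total_skus: int) -> float:
    """Share of stocked SKUs that have seen little or no recent movement."""
    return 100.0 * slow_moving_skus / total_skus

def inventory_accuracy_pct(matching_counts: int, audited_counts: int) -> float:
    """Share of audited items where the system count matched the shelf count."""
    return 100.0 * matching_counts / audited_counts

# hypothetical audit results
slow = slow_moving_pct(42, 500)                # 8.4% -- under the 10% target
accuracy = inventory_accuracy_pct(970, 1000)   # 97.0% -- above the 95% ideal
print(f"Slow-moving inventory: {slow:.1f}% (target: under 10%)")
print(f"Inventory accuracy: {accuracy:.1f}% (target: 95% or above)")
```

Tracking these two numbers per location over time is usually more informative than a single company-wide snapshot.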

5. Assess and Tweak the MRO Budget

Some people make the mistake of treating the MRO budget as a static entity, which can cause them to miss money-saving opportunities. For example, consolidating purchases with one MRO supplier instead of several reduces transactional overhead; besides saving on shipping, the company may also become eligible for volume discounts.

Regularly scrutinizing the MRO budget can also reveal whether a business is cutting costs in the wrong places. Perhaps it switched to cheaper cutting tools to minimize spending. Those tools may have a lower upfront cost but add expense overall: if employees complain that the tools break often or quickly become dull during typical use, managers will end up buying more of them than before to compensate for those shortcomings.

Making MRO Spending Reductions a Priority

These five tips show how businesses can act strategically to limit MRO spending's adverse effects on the supply chain. Doing so can keep a company within its budget and make it more responsive when marketplace shifts demand operational changes.

____________________________________________________________________

Emily Newton is an industrial journalist. As Editor-in-Chief of Revolutionized, she regularly covers how technology is changing the industry.

cloud

Is Your Company Secure On The Cloud? 5 Must-Knows To Manage Risks.

Cybersecurity breaches have become all too common, putting public health, individuals’ private information, and companies in jeopardy.

With cloud computing prevalent in business as a way to store and share data, workloads and software, a greater amount of sensitive material is potentially at risk. Therefore, company leaders need to prioritize cloud security and know how to manage the risks, says Tim Mercer (www.timtmercer.com), ForbesBooks author of Bootstrapped Millionaire: Defying the Odds of Business.

“Cloud adoption is a business model that provides convenience, cost savings, and near-permanent uptimes compared to on-premises infrastructure,” Mercer says. “But cyberattacks continue to plague organizations of every size, and moving your IT infrastructure and services to cloud environments requires a different approach to traditional deployments.

“A private cloud keeps all infrastructure and systems under the company’s control, while a public cloud hands over the responsibility to a third-party company. In hybrid deployments, which most organizations adopt, some services are in the public cloud infrastructure while others remain in the company’s data center. Regardless of which cloud deployment you choose, you should know the cloud security basics or consult with cybersecurity experts before migrating to the new environment.”

Mercer offers five points company leaders need to know about cloud security to help manage their risks:

Shared resources for multi-tenancy cloud customers. “Multi-tenancy refers to the shared resources your cloud service provider will allocate to your information,” Mercer says. “The way the cloud and virtualization works is, instead of physical infrastructure dedicated to a single organization or application, virtual servers sit on the same box and share resources between containers.” A container is a standard unit of software that packages code and helps the application run reliably from one computing environment to another. “You should ensure that your cloud service provider secures your containers and prevents other entities from accessing your information,” Mercer says.

Data encryption during transmission and at rest. Accessing data from a remote location requires that a company’s service provider encrypt all the business’ information – whether at rest in the virtual environment or when being transmitted via the internet. “Even when the service provider’s applications access your information,” Mercer says, “it should not be readable by anyone else except your company’s resources. To protect your information, ask your service provider about what encryption they use to secure your data.”

Centralized visibility of your cloud infrastructure. Mercer says it’s not enough to trust service providers; you’ll also want to verify that your data remains secure in their host environments. “Cloud workload protection tools provide centralized visibility of all your information so you can get adequate oversight of the environment,” Mercer says. “Ask your cloud company if they can provide you with security tools such as network traffic analysis and inspection of cloud environments for malicious content.”

An integrated and secure access control model. Access control models remain a major risk in cloud environments. “Your provider should have cloud-based security that includes a management solution to control user roles and maintain access privileges,” Mercer says.

Vendor sprawl management with threat intelligence. “In complex cloud deployments,” Mercer says, “you may end up using different vendors, each with its own cybersecurity framework. Threat intelligence solutions can provide you with clear insight into all your vendors and the latest global threats that could put your business systems at risk. A threat intelligence tool will gather and curate information from a variety of cybersecurity research firms and alert you to any vulnerabilities in your vendor’s system.”

“For any organization that’s considering a complete cloud migration, understanding the entire threat landscape is essential,” Mercer says. “A team of cybersecurity experts can assist with the planning and oversight of your cloud migration to mitigate risks and establish the necessary controls.”

______________________________________________________________

Tim Mercer (www.timtmercer.com) is the founder of IBOXG, a company that provides technology services and solutions to government agencies and Fortune 500 corporations. He also is the ForbesBooks author of Bootstrapped Millionaire: Defying the Odds of Business. Mercer was inspired to pursue a career in IT as a consultant after he became a telecom operator while in the U.S. Army. After growing up in difficult economic circumstances in the rural South, Mercer achieved success as an entrepreneur, then recovered from the financial crisis of 2007-2008 after starting IBOXG. The company has accrued over $60 million in revenues since its inception in 2008.

cloud

5 Strategies to Reduce Cloud Cost

After initial migration to the cloud, companies often discover that their infrastructure costs are surprisingly high. No matter how good the initial planning and cost estimation process was, the final costs almost always come in above expectations.

On-demand provisioning of cloud resources can be used to save money, but initially it tends to increase infrastructure usage because of the ease and speed with which resources can be provisioned. Companies shouldn't be discouraged by that, and infrastructure teams shouldn't use it as a reason to tighten security policies or take flexibility back from the engineering teams. There are ways to achieve both high flexibility and low cost, but doing so requires experience, the right tooling, and small changes to the development process and company culture.

In this article, we present five strategies that we use to help companies reduce their cloud costs and effectively plan for cloud migration.

Lightweight CICD

In one of our recent articles we discussed how companies can migrate to microservices but often forget to refactor the release process. The monolithic release process can lead to bloated integration environments. Unfortunately, after being starved for test environments in the data center, teams often overcompensate when migrating to the cloud by provisioning too many environments. The ease with which it can be done in the cloud makes the situation even worse.

Unfortunately, a high number of non-production environments doesn't even increase speed to market. Instead, it can lead to a longer and more brittle release process, even when every part of that process is automated.

If you notice that your non-production infrastructure costs are getting high, you may be able to reduce your total cloud costs by implementing a lightweight continuous delivery process. To implement it, the key changes would include:

-Shifting testing to the level of individual microservices or applications in isolation. If designed right, the majority of defects can be found through service-level testing, and proper stubs and test data ensure high test coverage.

-Reducing the number of integration testing environments, including functional integration, performance integration, user acceptance, and staging.

-Embracing service mesh and smart routing between applications and microservices. The service mesh can allow multiple logical “environments” to safely exist within the perimeter of production environments and allows testing of services in the “dark launch” mode directly in production.

-Onboarding modern continuous delivery tooling such as Harness.io to streamline the CICD pipeline, implement safe dark launches in the production environment, and enable controlled and monitored canary releases.

See our previous article that goes into more detail on the subject.
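As an illustration of the first point above, the sketch below tests a service in isolation by stubbing its downstream dependency, so no shared integration environment is needed. The service name and discount logic are hypothetical.

```python
from unittest.mock import Mock

def price_with_discount(order_total: float, discount_service) -> float:
    """Business logic under test: applies a discount fetched from another service."""
    pct = discount_service.get_discount_pct(order_total)
    return round(order_total * (1 - pct / 100), 2)

def test_discount_applied():
    stub = Mock()                              # stand-in for the downstream microservice
    stub.get_discount_pct.return_value = 10    # canned response, no network involved
    assert price_with_discount(200.0, stub) == 180.0
    stub.get_discount_pct.assert_called_once_with(200.0)

test_discount_applied()
print("service-level test passed with no integration environment")
```

Because the downstream call is stubbed, tests like this run in milliseconds on every commit, which is what makes reducing the number of integration environments practical.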

Application modernization: containers, serverless, and cloud-native stack

The lift and shift strategy of cloud migration is becoming less and less popular but only a few companies choose to do deep application modernization and migrate their workloads to containers or serverless computing. Deploying applications directly on VMs is a viable approach, which can align with immutable infrastructure, infrastructure-as-code, and lightweight CICD requirements. For some applications, including many stateful components, it is the only reliable choice. However, VM-based deployment brings infrastructure overheads.

Containers improve resource (memory, CPU) utilization by approximately 30% compared to VM-based workloads because of denser packing and larger machines. Asynchronous jobs further improve efficiency by scavenging unused capacity.
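A back-of-envelope sketch of what that utilization gain means for fleet size; all capacity numbers are hypothetical:

```python
import math

def machines_needed(total_cpu_demand: int, cpu_per_machine: int,
                    avg_utilization: float) -> int:
    """Machines required to serve a CPU demand at a given average utilization."""
    return math.ceil(total_cpu_demand / (cpu_per_machine * avg_utilization))

demand = 400                    # total vCPUs of application demand (hypothetical)
per_machine = 16                # vCPUs per machine
vm_util = 0.45                  # assumed utilization with one app per VM
container_util = vm_util * 1.3  # roughly 30% better with dense packing

print(machines_needed(demand, per_machine, vm_util))         # VM-based fleet
print(machines_needed(demand, per_machine, container_util))  # container cluster
```

With these assumptions the same demand fits on noticeably fewer machines, which is where the infrastructure savings come from.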

The good news is that container platforms have matured significantly over the last few years. Most cloud providers support Kubernetes as a service with Amazon EKS, Google GKE, and Azure AKS. With only rare exceptions, such as some packaged legacy applications or non-standard technology stacks, a Kubernetes-based platform can support most application workloads and satisfy enterprise requirements.

Whether to host stateful components such as databases, caches, and message queues in containers is still a matter of choice, but even migrating only the stateless applications will reduce infrastructure costs. Where stateful components are not hosted in container platforms, cloud services such as Amazon RDS, Amazon DynamoDB, Amazon Kinesis, Google Cloud SQL, Google Spanner, Google Pub/Sub, Azure SQL, Azure CosmosDB, and many others can be used. We have recently published an article comparing a subset of cloud databases and EDWs.

More advanced modernization can include migration to serverless deployments with AWS Lambda, Google Cloud Functions, or Azure Functions. Modern cloud container runtimes like Google Cloud Run or AWS Fargate offer a middle ground between opinionated serverless platforms and regular Kubernetes infrastructure. Depending on the use case, they can also contribute to infrastructure cost savings. As an added benefit, using managed cloud services reduces the human costs associated with provisioning, configuration, and maintenance.

Reactive and proactive scalability

There are two types of scalability that companies can implement to improve the utilization of cloud resources and reduce cloud costs: reactive auto-scaling and predictive AI-based scaling. Reactive auto-scaling is the easiest to implement, but it only works for stateless applications that don't require long start-up and warm-up times. Because it is based on run-time metrics, it doesn't handle sudden bursts of traffic well: either too many instances get provisioned when they aren't needed, or new instances arrive too late and customers experience degraded performance. Applications configured for auto-scaling should therefore be designed to start and warm up quickly.
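A minimal sketch of the reactive decision, similar in spirit to what managed autoscalers such as the Kubernetes Horizontal Pod Autoscaler compute; the CPU target and replica bounds are hypothetical:

```python
def target_replicas(current_replicas: int, cpu_millicores: int,
                    target_millicores: int = 600,
                    min_replicas: int = 2, max_replicas: int = 20) -> int:
    """Reactive decision: scale so average CPU per replica approaches the target."""
    # integer ceiling division avoids float rounding surprises
    desired = -(-(current_replicas * cpu_millicores) // target_millicores)
    return max(min_replicas, min(max_replicas, desired))

print(target_replicas(4, 900))  # overloaded at 900m per pod -> scale out to 6
print(target_replicas(4, 150))  # nearly idle -> scale in, floored at 2 replicas
```

Note that the decision is purely a function of the current metric, which is exactly why it lags behind sudden traffic bursts.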

Predictive scaling works for all types of applications, including databases, other stateful components, and applications that take a long time to boot and warm up. It relies on AI and machine learning that analyze past traffic, performance, and utilization, and predict the infrastructure footprint required to handle upcoming surges or slowdowns in traffic.

In our past implementations, we found that most applications have well-defined daily, weekly, and annual usage patterns. It applies to both customer-facing and internal applications but works best for customer applications due to natural fluctuations in how customers engage with companies. In more advanced cases, internal promotions and sales data can be used to predict future demand and traffic patterns.
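The idea can be sketched with a toy forecaster that averages the load seen at the same hour on previous days, exploiting the daily patterns mentioned above; a real system would use proper ML models, and all numbers here are hypothetical:

```python
import math

def forecast_next(history_by_hour: dict[int, list[float]], hour: int) -> float:
    """Predict load as the average seen at this hour on previous days."""
    samples = history_by_hour[hour]
    return sum(samples) / len(samples)

def instances_for(load_rps: float, rps_per_instance: float = 100.0,
                  headroom: float = 1.2) -> int:
    """Size the fleet ahead of time, with some headroom over the forecast."""
    return math.ceil(load_rps * headroom / rps_per_instance)

# hypothetical requests/sec observed at 09:00 over the last four days
history = {9: [820.0, 790.0, 810.0, 780.0]}
predicted = forecast_next(history, 9)   # expected load tomorrow at 09:00
print(instances_for(predicted))         # provision this many before the surge
```

Because the fleet is sized before the surge arrives, slow-starting applications and stateful components get the time they need to boot and warm up.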

A word of caution applies to both auto-scaling and predictive scaling. Most cloud providers offer discounts for stable, continuous usage of CPU capacity or other cloud resources. If scaling can't deliver better savings than those discounts, it may not be worth implementing.
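A simple way to run that break-even check; the hourly rate and discount level are hypothetical:

```python
def monthly_cost_committed(peak_instances: int, hourly_rate: float,
                           discount: float, hours: int = 730) -> float:
    """Run a peak-sized fleet around the clock at a committed-use discount."""
    return peak_instances * hourly_rate * (1 - discount) * hours

def monthly_cost_autoscaled(avg_instances: float, hourly_rate: float,
                            hours: int = 730) -> float:
    """Pay the full on-demand rate, but only for the average footprint."""
    return avg_instances * hourly_rate * hours

rate = 0.10  # $/hour per instance (hypothetical)
committed = monthly_cost_committed(peak_instances=20, hourly_rate=rate, discount=0.30)
scaled = monthly_cost_autoscaled(avg_instances=15, hourly_rate=rate)
print(f"committed: ${committed:.0f}/month, autoscaled: ${scaled:.0f}/month")
# with these numbers, a 30% discount on a stable fleet of 20 beats autoscaling
# that averages 15 instances; with a lower average the comparison flips
```

Running this comparison with real billing data for each workload tells you which workloads are worth the engineering effort of making them scalable.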

On-demand and low-priority workloads

To take advantage of both dynamic scalability and cloud discounts for continued usage of resources, a company can implement on-demand provisioning of low-priority workloads. Such workloads can include in-depth testing, batch analytics, reporting, and so on. For example, even with lightweight CICD, a company would still need to perform service-level or integration testing in test or production environments. The CICD process can be designed so that heavy testing aligns with low production traffic, which for customer-facing applications often means nighttime. Most cloud providers still grant continued-usage discounts even when a VM is taken down and reprovisioned with a different workload, so a company need not sacrifice deployment flexibility or give up its existing provisioning and deployment automation.

The important aspect of on-demand provisioning of environments is destroying them as soon as they are no longer needed. Our experience shows that engineers often forget to shut down environments they aren't using. To avoid relying on people, we implement shutdown either as part of the continuous delivery pipeline or through an environment leasing system. In the latter case, each newly created on-demand environment gets a lease, and if the owner doesn't explicitly renew it, the environment is destroyed when the lease expires. Separate monitoring and garbage-collection processes for cloud resources are often needed as well, to ensure that every unused resource is eventually destroyed.
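The leasing idea can be sketched as follows; the class and environment names are hypothetical, and a real implementation would tear down cloud resources instead of just forgetting the lease:

```python
from datetime import datetime, timedelta

class EnvironmentLeases:
    """Track on-demand environments and collect those whose lease has lapsed."""

    def __init__(self):
        self._expiry: dict[str, datetime] = {}

    def create(self, env: str, now: datetime, ttl_hours: int = 4) -> None:
        self._expiry[env] = now + timedelta(hours=ttl_hours)

    def renew(self, env: str, now: datetime, ttl_hours: int = 4) -> None:
        self.create(env, now, ttl_hours)

    def collect_expired(self, now: datetime) -> list[str]:
        """Return (and forget) environments whose lease has expired."""
        expired = [env for env, t in self._expiry.items() if t <= now]
        for env in expired:
            del self._expiry[env]  # real code would tear down cloud resources here
        return expired

leases = EnvironmentLeases()
t0 = datetime(2021, 1, 1, 9, 0)
leases.create("feature-x-test", now=t0, ttl_hours=4)
leases.create("perf-run", now=t0, ttl_hours=2)
print(leases.collect_expired(now=t0 + timedelta(hours=3)))  # only the 2h lease lapsed
```

The garbage-collection pass would run on a schedule, so even environments that were never wired into a pipeline eventually get cleaned up.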

An additional cost-saving measure we have used effectively in several client implementations is deeply discounted cloud resources that come with limited SLA guarantees, such as spot (AWS) or preemptible (GCP) VM instances. They represent unused capacity that is several times cheaper than regular VM instances and can be used for build-test automation and various batch jobs that are not sensitive to restarts.

Monitoring 360

The famous maxim that you can't manage what you can't measure applies to cloud costs as well. For monitoring cloud infrastructure, the obvious choice is the cloud provider's own tooling. To make the most of cost monitoring, cloud resources have to be organized so that costs can be measured by:

-Department

-Team

-Application or microservice

-Environment

-Change

While the first points might be obvious, the last one may require additional clarification. In modern continuous delivery implementations, nearly every commit to the source code repository triggers a continuous integration and delivery pipeline, which in turn provisions cloud infrastructure for test environments. This means every change has an associated infrastructure cost, which should be measured and optimized. We have written more extensively about measuring change-level metrics and KPIs in the Continuous Delivery Blueprint book.

Multiple techniques exist to properly measure cloud infrastructure costs:

-Organizing cloud projects by departments, teams, or applications, and associating the cost and billing of such projects with department or team budgets.

-Tagging cloud resources with department, team, application, environment, or change tags.

-Using cloud cost analysis and optimization tools, such as Harness.io, which provides continuous efficiency features to measure, report, and optimize infrastructure costs.
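As a sketch of what tag-based cost reporting looks like, the snippet below rolls billing line items up by any tag dimension; the records and tag values are hypothetical:

```python
from collections import defaultdict

def costs_by(records: list[dict], tag: str) -> dict[str, float]:
    """Roll billing line items up by one tag dimension."""
    totals: dict[str, float] = defaultdict(float)
    for rec in records:
        totals[rec["tags"].get(tag, "untagged")] += rec["cost"]
    return dict(totals)

# hypothetical billing export with team / environment / change tags
billing = [
    {"cost": 120.0, "tags": {"team": "search",   "env": "prod"}},
    {"cost": 40.0,  "tags": {"team": "search",   "env": "test", "change": "abc123"}},
    {"cost": 75.0,  "tags": {"team": "checkout", "env": "prod"}},
    {"cost": 15.0,  "tags": {"team": "checkout", "env": "test", "change": "def456"}},
]

print(costs_by(billing, "team"))    # {'search': 160.0, 'checkout': 90.0}
print(costs_by(billing, "change"))  # per-change test cost; prod rows show as 'untagged'
```

A large "untagged" bucket in any roll-up is itself a useful signal that the tagging discipline needs attention.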

With thorough cost monitoring and the right tooling, a company can develop a clear understanding of its inefficiencies and apply the cost optimization techniques outlined above.

Conclusion

Cloud migration is a challenging endeavor for any organization. While it's important to estimate cloud infrastructure costs in advance, companies shouldn't be discouraged when the first invoices come in higher than expected. The first priority should be getting the applications running and avoiding disruption to the business. A company can then use the strategies outlined above to optimize its cloud infrastructure footprint and reduce cloud costs. Grid Dynamics has helped numerous Fortune-1000 companies optimize cloud costs during and after the initial phases of cloud migration. Feel free to reach out to us if you have any questions or if you need help optimizing your cloud infrastructure footprint.