Twenty-first century technologies have catapulted many global companies and their management teams into the center of an alternate universe: a world where IT reigns supreme and acronyms are meaningful. Why? Because, in this world, new methodologies and standards can make the difference between a relatively smooth-running, flexible infrastructure and a maelstrom of competing, proprietary systems that can only be modified by vendors at an annual cost running into the millions.
Why now? Because across industries, global markets are pushing customer segmentation, competition and pricing wars to the boiling point, straining infrastructures and pressing IT staff to increase systems productivity and agility. What many management teams don't understand is that their current operational infrastructures may have reached a breaking point. Most of these so-called heterogeneous environments consist of patchwork layers of new and legacy applications and systems, often hard-wired, point-to-point integrations designed to perform yesterday's tasks with an ever-changing array of co-conspirators.
Financial services, healthcare and telecommunications all share this operational challenge. Regulatory and competitive pressures have forced some to re-engineer their IT operations to survive. Others, specifically communications companies, have been dragged into the fray by market demand for sales and service of bundled products such as video, voice and data. Manufacturing, however, has focused almost exclusively on powerful design and collaboration tools to stay ahead of escalating product complexity and global competition, further complicating the interoperability challenge. While this approach is certainly understandable, and even good business practice, it may be short-sighted.
Concurrent engineering and product data management (PDM) systems have delivered major improvements in information access, collaboration and project management; however, operational silos persist due to the maze of data, systems and processes required to bring complex products to market. These silos present barriers to centralized control and management of human and IT assets and, as a result, to enterprise initiatives critical to improving the organization's ability to respond to economic pressures, market opportunities or infrastructure challenges. So how do you evaluate and "tweak" processes involving multiple departments and suppliers scattered around the globe, managing thousands of parts and functions? Most companies don't even track or catalogue their most critical processes, except perhaps at the departmental or key-function level.
Until fairly recently, this level of interoperability was just a pipe dream or, if mandated, required expensive and disruptive integration projects, often solving one problem while proliferating the hard-coding that caused the initial problem. These methodologies have impeded growth and strained relationships between customers and vendors for years, leading to accusations of "vendor lock-in" and worse. All that is beginning to change.
Advances in XML standards, application programming interface (API) design, Web services, business process management (BPM) tools and service-oriented architecture (SOA) frameworks are converging to create opportunities for complex organizations to gain control over the processes that determine market success or failure. While process optimization requires focus, resources and commitment, it offers original equipment manufacturers (OEMs) a path to managing their business operations according to their business plan, not the reverse. Reaching that point, however, requires both top-down and grassroots commitment to thinking in terms of "business flows," not just functions. After all, this is not just about vision; it's about survival.
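The "business flows" idea can be made concrete with a minimal sketch. The Python below is purely illustrative (the flow name, steps and payload fields are hypothetical, not any vendor's API): it models a process as an ordered chain of interchangeable service calls exchanging plain data, rather than as hard-wired, point-to-point integrations.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# A step is any callable that takes a payload dict and returns an
# updated payload -- a stand-in for a Web-service or API endpoint.
Step = Callable[[Dict], Dict]

@dataclass
class BusinessFlow:
    """A named sequence of loosely coupled process steps."""
    name: str
    steps: List[Step] = field(default_factory=list)

    def add_step(self, step: Step) -> "BusinessFlow":
        self.steps.append(step)
        return self

    def run(self, payload: Dict) -> Dict:
        # Swapping a supplier or vendor means replacing one callable,
        # not rewiring the whole chain.
        for step in self.steps:
            payload = step(payload)
        return payload

# Hypothetical steps in an order-to-ship flow.
def validate_order(p: Dict) -> Dict:
    return {**p, "valid": p.get("qty", 0) > 0}

def reserve_parts(p: Dict) -> Dict:
    return {**p, "reserved": p["valid"]}

def schedule_build(p: Dict) -> Dict:
    return {**p, "scheduled": p["reserved"]}

flow = (BusinessFlow("order-to-ship")
        .add_step(validate_order)
        .add_step(reserve_parts)
        .add_step(schedule_build))

result = flow.run({"order_id": "A-100", "qty": 5})
```

The point of the sketch is the design choice: because each step sees only a shared payload, the flow itself can be tracked, catalogued and re-ordered as a first-class object, which is what functional silos make impossible.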
Process Optimization In Practice