Tuesday, December 8, 2009

Progress Software Nabs Mindreef

To be better positioned to deliver testing and governance products geared toward continuous testing and validation, and thus toward ensuring the reliability and quality of multi-tier, composite SOA applications, Progress Software Corporation recently acquired Mindreef. It is interesting to note how quiet the event was: it was reported only briefly by ZDNet bloggers Joe McKendrick and Dana Gardner.

Mindreef was a privately held firm founded in 2002 by Frank Grossman and Jim Moskun, who leveraged their deep expertise in debugging and testing Microsoft Windows, Java, and device drivers to create the Mindreef SOAPscope products for SOA testing and validation. Mindreef was acquired by Progress Software and folded into the Progress Actional product group in June 2008.

Prior to being acquired by Progress Software in early 2006, Actional Corporation was an independent, leading provider of Web services management (WSM) software for visibility and run-time governance of distributed IT systems in a SOA. Actional’s SOA management products were incorporated under the product name Progress Actional within Progress’ Enterprise Infrastructure Division, and the line is now a major element of the Progress SOA Portfolio.

In a nutshell, Mindreef has been wrapped into the Progress Actional product group because it addresses SOA management at the design and testing phase, while Actional primarily addresses SOA management at the production (run-time) phase (e.g., tracing transactions). Thus, Progress now has an expanded solution that addresses the quality and management of the full SOA lifecycle, from early concept and design through go-live implementation, on-boarding of new Web services, and overall SOA production management.

Frank Grossman, former chief executive officer (CEO) and founder of Mindreef, is now vice president (VP) of Technology for Progress Actional, reporting to Dan Foody, who is in charge of Progress Actional. For more information on the acquisition’s rationale, see the frequently asked questions (FAQ) page here.

With so much product integration still in the planning stages this soon after the announcement of the two recent acquisitions (the other being Iona Technologies), Progress hopes to have new slide decks to accompany analyst briefings on virtually all of its products over the next several months. Look for follow-up blog posts from me at that time.

Zooming Into SOAPscope

Designed for easy use by architects, service and support personnel as well as SOA operations managers, the Mindreef SOAPscope product family comprises SOAPscope Server, SOAPscope Architect, SOAPscope Tester, and SOAPscope Developer.

Essentially, Mindreef products collect information about Simple Object Access Protocol (SOAP) transactions and use it to shed light on Web services communications. But while most such logging tools store data in pesky flat files, SOAPscope stores it in a relational database, making it easy to use even for folks who are not necessarily XML and SOAP experts.
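To illustrate why a relational store beats flat-file logs for this purpose, here is a minimal, purely hypothetical sketch (not Mindreef's actual implementation; the table layout and helper function are invented) of capturing SOAP exchanges into SQLite so they can be queried with plain SQL:

```python
# Hypothetical sketch: log SOAP request/response pairs to a relational store
# (SQLite) instead of appending them to flat files, so that anyone on the team
# can query them with plain SQL. Illustrative only, not Mindreef's code.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("soap_log.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS soap_transactions (
        id           INTEGER PRIMARY KEY AUTOINCREMENT,
        captured_at  TEXT,   -- ISO-8601 timestamp
        service      TEXT,   -- logical service name
        operation    TEXT,   -- WSDL operation invoked
        status       TEXT,   -- 'ok' or 'fault'
        request_xml  TEXT,
        response_xml TEXT
    )
""")

def log_transaction(service, operation, status, request_xml, response_xml):
    """Record one SOAP exchange so testers can inspect it later."""
    conn.execute(
        "INSERT INTO soap_transactions "
        "(captured_at, service, operation, status, request_xml, response_xml) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), service, operation,
         status, request_xml, response_xml),
    )
    conn.commit()

# Non-experts can now answer questions without parsing raw log files:
faults = conn.execute(
    "SELECT captured_at, operation FROM soap_transactions WHERE status = 'fault'"
).fetchall()
```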

Mindreef SOAPscope Server was initially called Mindreef Coral, and was re-released under the current name in mid-2006. Like many software testing tools, this collaborative testing product includes a “play” button that exercises Web services according to specific scenarios. If the services for some steps of a process scenario are not available, SOAPscope Server can even simulate them.

The collaborative team lifecycle support comes by means of a “playback” feature that shows what happened at each step along the way, so that different members of the team can inspect for their respective areas of concern. For instance, developers can check for syntax errors, while architects can test if a service that has been invoked many times could still eventually trigger a scenario that violates company policies.

Taming the SOA Beast – Part 1

Certainly, I admit to not being a programmer or a techie expert (not to use somewhat derogatory words like “geek” or “nerd”) per se. Still, my engineering background and years of experience as a functional consultant should suffice for understanding the advantages and possible perils of service oriented architecture (SOA).

On one hand, SOA’s advantages of flexibility (agility), component reusability, and standards-based interoperability have been well publicized. On the other hand, these benefits come at a price: the difficulty of governing and managing all these mushrooming “software components without borders,” which stem from different origins yet are able to “talk to each other” and exchange data and process steps, while being constantly updated by their respective originators (authors, owners, etc.).

At least one good (or comforting) fact about the traditional approach to application development was that old monolithic applications would have a defined beginning and end, and there was always clear control over the source code.

Instead, the new SOA paradigm entails composite applications assembled from diverse Web services (components) that can be written in different languages, and whose source code is hardly ever accessible to the consuming parties (other services). In fact, each component exposes itself only in terms of what data and processes it needs as input and what it will return as output; what goes on “under the hood” remains largely a “black box,” or someone’s educated guess at best.

Consequently, SOA causes radical changes in the well-established borders (if not their complete blurring) of software testing, since runtime (production) issues are melding with design-time (coding) issues, and the traditional silos between developers, software architects and their quality assurance (QA) peers appear to be diminishing when it comes to Web services.

Transparency is therefore crucial to eliminating the potential chaos and complexity of SOA. Otherwise, the introduction of SOA will simply have moved the problem area from a low level (coding) to a higher level (cross-enterprise processes), without any reduction in problems. If anything, the problems will only multiply in a distributed, heterogeneous, multi-enterprise environment.

Then and Now

Back to the traditional practices and mindset: the software world considers design as development-centric (i.e., a “sandbox” scenario), and runtime as operation-centric (i.e., a part of a real-life customer scenario). But with SOA that distinction blurs, since Web services are being updated on an ongoing basis, thus magnifying the issues of recurring operations testing and management.

Namely, companies still have to do component-based software testing (to ascertain whether the code is behaving as expected) at the micro (individual component) level, but there is also application development at the macro (business process) level, since composite applications are, well, composed of many disparate Web services. In other words, programmers are still doing traditional development work, but now that development work becomes involved in infrastructure issues too.

For instance, what if a Web service (e.g., obtaining exchange rates, weather information, street maps information, air flight information, corporate credit rating information, transportation carrier rates, etc.), which is part of a long chain (composite application), gets significantly modified or even goes out of commission? To that end, companies should have the option of restricting the service’s possibly negative influence in the chain (process) until a signaling mechanism is in place, which can highlight changes that may compromise the ultimate composite application.
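One possible form such a signaling mechanism could take, sketched here only as an illustration (the baseline digests and helper names are hypothetical, not any vendor's actual feature), is to fingerprint each service's published contract (its WSDL) and flag or bypass the service when the fingerprint changes or the contract becomes unreachable:

```python
# Illustrative change-detection gate for services in a composite application.
# The baseline fingerprints and function names are hypothetical.
import hashlib
import urllib.request

# Known-good fingerprints of each service's WSDL, recorded at design time.
BASELINE = {
    "exchange_rates": "9f2c...",   # placeholder SHA-256 digests
    "carrier_rates":  "41ab...",
}

def wsdl_fingerprint(wsdl_url: str) -> str:
    """Download the service contract and return its SHA-256 digest."""
    with urllib.request.urlopen(wsdl_url, timeout=10) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def service_is_safe(name: str, wsdl_url: str) -> bool:
    """Return False (so the chain can bypass or stub the service) if the
    contract has changed or the service is unreachable."""
    try:
        return wsdl_fingerprint(wsdl_url) == BASELINE[name]
    except OSError:
        return False  # out of commission: restrict its influence on the chain
```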

Functional testing in such environments is a challenge because, by nature, Web services are not visual like conventional, user-facing software applications. In place of a front-end or user interface (UI), some astute testing software can overlay a form that allows team members to see the underlying schema (data structure) of the Web service being tested.

Furthermore, testing SOA applications is problematic since it is not only difficult for a company to know if a particular Web service will deliver on its “contract”, but also, even if it does, whether it will maintain the company’s adopted standards of performance (e.g., under increased loads) and security while complying with its adopted regulatory policies.

Thus, modern SOA software testing tools increasingly provide support for multiple roles, whereby architects can codify policies and rules, developers check for compliance during the test cycle, and support and operations staff can check for compliance issues when problems occur. The new crop of SOA testing tools also increasingly supports a range of tests, including functional and regression testing, interoperability testing, and policy conformance. Unlike traditional software testing tools that inspect code, Web services testing tools deal with the quality of the extensible markup language (XML) messaging layer.
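As a minimal sketch of what "testing the XML messaging layer" can mean in practice (the message and field names below are invented for illustration), a functional test can assert on the structure and values of a response message rather than on any code internals:

```python
# Sketch of a functional test at the XML messaging layer: assert that a
# service's response message carries the expected structure and values.
# The sample message and the USD policy rule are invented for illustration.
import xml.etree.ElementTree as ET

response = """
<quoteResponse>
    <currency>USD</currency>
    <rate>1.0873</rate>
</quoteResponse>
"""

def test_quote_response(xml_text: str) -> None:
    root = ET.fromstring(xml_text)
    assert root.tag == "quoteResponse", "unexpected message type"
    assert root.findtext("currency") == "USD", "policy: quotes must be in USD"
    assert float(root.findtext("rate")) > 0, "rate must be positive"

test_quote_response(response)  # raises AssertionError on a policy violation
```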

And although both traditional and Web services testing tools deal with syntax, Web services require team members to have a higher-level awareness of business rules and service policies. This is owing to the highly distributed SOA environment, which makes keeping track of changes difficult and underscores the new complexity of SOA management.

In fact, change management in pre- and post-application development is essential to filter out redundant changes, prioritize changes, and resolve conflicting changes. But also, if a certain message between the points A and B doesn’t pass in a real-life scenario, there has to be awareness of what needs to be done to rectify it now and in the future.

The numerous SOA-related problems outlined above have caused the previously siloed areas to come much closer to each other: software lifecycle management, application performance management, and information technology (IT) governance, with change management acting as a core information source on all changes in the environment. This union should enable companies to discover which Web services and components exist, who the owners are, and which services and components are actually consumed, and by which applications and business processes.

It’s About Process (or the Ability to be Responsive) — Part II

Full-fledged BPM system components thus include visual process modeling: a graphical depiction of a process that becomes a part of the application and governs how the business process performs when companies run the application.

They also feature Web and systems integration (SI) technologies, which include displaying and retrieving data via a Web browser and which enable companies to orchestrate the necessary people and legacy applications into their processes.

Another important BPM component is what’s been termed business activity monitoring (BAM), which gives reports on exactly how (and how well) the business processes and flows are working (for more information, see TEC’s article entitled “Business Activity Monitoring - Watching The Store For You”).

Optimizing processes that involve people and dynamic change has been traditionally difficult, and one barrier to optimization has been the lack of visibility and ownership for processes that span functional departments or business units, let alone different enterprises. In addition, the industry often changes faster than information technology (IT) departments can update the applications set that the business relies on to do its work, thus stifling innovation, growth, performance and so on.

But today, the pervasiveness of Web browsers and the emergence of simpler application integration technologies such as Web services, simple object access protocol (SOAP), extensible markup language (XML), business process execution language (BPEL), etc. have enabled IT staff to deploy technology that supports the business process across functional, technical, and organizational silos.

In the broadest sense, BPM components address the issues of the following: process modeling, documentation, certification, collaboration, compliance, optimization, and automation (i.e., via a workflow engine that is rule-based).

Again, highly functional, top-of-the-range BPM suites use graphical (visual) process modeling tools that enable business users and business analysts (i.e., the people who are most familiar with the process) to implement and manage the process definition. To complete any transaction, the BPM suite must also call on the various siloed legacy applications that hold the necessary information, for example, customer, inventory, or logistics data.

But to the ordinary user, a complex process that runs across many enterprises and various systems should appear seamless. End users should be spared the effort of hunting down the scattered information themselves, since the underlying BPM platform provides tools for:

* Business analysts to model (and change) the business processes and define the business rules that control how those processes behave;
* IT departments to integrate the necessary legacy systems;
* Joint teams to build applications for the end user that enforce the processes and rules; and
* Management to review process performance (e.g., the required time to resolve client return exceptions) and even adjust process parameters in real-time (e.g., increasing the dollar value threshold during peak periods to trigger management review and approvals of client returns).

Therefore, the most vital BPM attributes would be the following: being event-driven, orchestrated, intended for both internal and external processes/customers, and leveraging human-centric workflow and business analytics.

With the leading BPM platforms/suites, everyone in the company will be working on the same shared data and process model, so changes to the process can be put into action very quickly. This is because these sophisticated platforms provide integrated process modeling, real-time process monitoring, and Web-based management reporting — all working in unison to support rapid process innovation.

BPM — Much More than Integration

BPM is often used to integrate multiple enterprise applications and various internal and external users into a new process, but it goes way beyond mere integration. Whereas traditional enterprise application integration (EAI) products help companies to move data between applications, BPM adds interaction with people and the ability to support processes, which then become as manageable as data.

BPM integrates existing applications, Web services, and people so that companies can quickly change, dismantle, or construct processes as required. Again, BPM enables a company to more cost-effectively and quickly model and change its business processes to meet the specific requirements of a particular business. Via BPM, people can be involved in two ways:

It’s About Process (or the Ability to be Responsive) — Part III

To that end, Webcom Inc. has leveraged the vast expertise earned while addressing many complex sales quote-to-order (Q2O) process issues (e.g., channel quote approvals, special pricing approvals, and special non-standard product feature request approvals) and has created a brand new workflow engine, which can be (and already is) used for many generic business processes.

Examples of such processes would be: RMA (Return Material/Merchandise Authorization), NFR (New Feature Request), ECN (Engineering Change Notice), NPR (New Product Release), bug tracking, engineering change requests, and many other business processes that require approval steps.

The Ability to Respond, On-demand

In May 2008, Webcom announced the availability of ResponsAbility, its newest offering addressing the case management and workflow processing areas. ResponsAbility is designed to speed the “time-to-resolution” process, eliminate unnecessary time delays and improve overall value chain communications and productivity through improved transparency and collaboration.

The idea behind this case management and workflow solution was to help organizations keep their projects on track and their employees on the same page, thereby making the lives of internal and external team members much less complicated (and more productive and enjoyable).

This straightforward application provides a central location (repository) for managing the key aspects of many types of cases, including product and service defects, customer and supplier complaints, non-conformance issues, health and safety incidents, and RMAs. Separate tabs keep key information within easy reach, whereby team members can log issues as they arise, prioritize them, and update their status as appropriate.

Built-in reports let users see open issues by project, projects by stage, and many other categories. On the proactive side, the tool can be leveraged to create and implement corrective and preventive actions (CAPA) and to support a plethora of regulatory and compliance requirements. All in all, users who have always had the responsibility now have the “ability to respond,” as required.

This case management software may not currently have all the bells and whistles associated with full-fledged BPM packages, such as programmatically driving a workflow engine, visual process modeling, process monitoring and optimization, or automatic task allocation based on workload. Still, it seems well suited to small and midsize companies, which can leverage such a tool, with its intuitive user interface (UI), to handle many if not all of their processes in an incremental manner.

The design and enforcement of processes is enabled because both administrators and end users are able to design workflows, notifications, and data collection forms, as well as set up permissions accordingly. The system manages cases by ushering each case through the resolution process and by tracking the progress of each case throughout that process.

The multi-tenant software as a service (SaaS) delivery model ensures that a customer can be up and running quickly, with all of the selected critical processes modeled and functional. No onsite deployment is necessary, and the software requires only a Web browser and minimal data and process setup.

Brethren Software Vendors as Likely ResponsAbility Users?

For example, a software development company can deploy this tool within a day or two and allow its customers to report bugs. This information can then be internally routed according to a customized workflow to the support department, then to the engineering and testing staff, and then back to the customer for approval and case closure.

To elaborate, the Software Bug workflow logically starts with the customer reporting a software bug. A default assignee at the software vendor then reviews it and either resolves it on the spot (hopefully) or assigns it to the software engineering staff, providing a test case. The software engineering team then determines the cause of the bug and either provides a workaround, fully fixes the bug, or determines that the software behaves as designed after all.
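Purely as an illustration (the status names follow the description above, but the encoding is hypothetical and not ResponsAbility's internal model), such a workflow can be thought of as a small transition table that records who acts next and where a case may move:

```python
# Hypothetical encoding of the bug workflow described above as a transition
# table: for each status, who acts next and which statuses a case may move to.
BUG_WORKFLOW = {
    "reported":            {"actor": "customer",    "next": ["in review"]},
    "in review":           {"actor": "support",     "next": ["resolved", "in engineering"]},
    "in engineering":      {"actor": "engineering", "next": ["workaround provided",
                                                             "bug fixed",
                                                             "works as designed"]},
    "bug fixed":           {"actor": "engineering", "next": ["in testing"]},
    "in testing":          {"actor": "qa",          "next": ["awaiting customer approval"]},
    "workaround provided": {"actor": "support",     "next": ["awaiting customer approval"]},
    "works as designed":   {"actor": "support",     "next": ["converted to feature request"]},
    "awaiting customer approval":
                           {"actor": "customer",    "next": ["closed"]},
}

def allowed_moves(status: str) -> list[str]:
    """Return the statuses a case in the given status may move to next."""
    return BUG_WORKFLOW[status]["next"]

print(allowed_moves("in review"))  # ['resolved', 'in engineering']
```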

At the same time, ResponsAbility can be used to allow customers to create new feature requests, which are then routed via a different customized workflow starting from project management, via development, release scheduling, back to development, quality assurance (QA), documentation (technical writers), product management, and finally to marketing teams.

Again, if the bug can be fixed, the case is assigned to the testing staff, then back to the support team, and finally back to the customer for approval and case closure. But if the issue turns out not to be a bug after all, the case is converted to a new feature request and follows an entirely different workflow.

To that end, the New Product Feature Request process starts with customers, sales & service people, channels and/or product managers requesting a new feature. Often, the existing users (install base special interest groups [SIGs]) are allowed to vote on it, and based on the number of votes and other factors, some new features are assigned to the engineering department to estimate the effort entailed to implement the requested feature.

Based on the estimate and other criteria, some new features are then assigned to the engineering or research and development (R&D) departments for implementation. Upon implementation, the new feature is assigned to the QA department for testing and approvals. Finally, based on the QA results, a new feature is returned back to engineering for a rework or is scheduled for production (or general availability).

Apparently, the various instances of a process (called cases) can be changed midstream. For example, something that was initially entered as a bug may, upon investigation, be classified as expected behavior. The customer who did not expect such behavior from the software can then change the case type of this instance from a bug to a new feature request without having to re-enter any information, and the case will then follow the prescribed new feature workflow.

Also, a built-in notification and permissions engine ensures that all communication and collaboration happens within ResponsAbility, so everybody is aware of anything that anybody ever stated about the case via comments, file attachments, etc.

Unlike some of the simple issue tracking software packages mentioned in Part II, ResponsAbility can be used not only for tracking things, but also for enforcing a process in order to ensure that things get done correctly. For example, a workflow engine can be set up to make sure that a process status cannot be changed from “bug fixed” to “in testing” until a concrete test case scenario is provided by a user via customizable online forms.
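A minimal sketch of how such a transition guard might work (the status and field names follow the example above; the code is illustrative only, not ResponsAbility's actual engine):

```python
# Illustrative guard: refuse the "bug fixed" -> "in testing" transition until
# the case carries a concrete test case supplied through the online form.
def can_transition(case: dict, from_status: str, to_status: str) -> bool:
    if (from_status, to_status) == ("bug fixed", "in testing"):
        return bool(case.get("test case", "").strip())
    return True

case = {"id": 4711, "status": "bug fixed", "test case": ""}
assert not can_transition(case, "bug fixed", "in testing")   # blocked

case["test case"] = "Create a quote with zero line items and submit it."
assert can_transition(case, "bug fixed", "in testing")        # now allowed
```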

Webcom — “Eating Own Dog Food”

It might be interesting to note that Webcom, as a software developer itself, has been using ResponsAbility internally since late 2006 for bug tracking and new feature introduction in its older sibling product, WebSource CPQ.

The traditional model, whereby the dedicated product/project manager and support staff were the only bidirectional conduit between the client’s team (i.e., WebSource CPQ users and administrators, local project manager, application owners, stakeholders, etc.) and Webcom’s team (i.e., developers, modelers, QA, consultants, product managers, etc.), has over time been shown to have many disadvantages.

Namely, despite the dedicated project manager’s intimate knowledge of each client’s installation, and the comfort level of an established, hand-holding relationship, the recurring challenge has been the bottleneck nature of the dedicated project management and support team, with no significant value added by this additional layer of communication.

Another disadvantage is the all-too-frequent “black hole” syndrome stemming from the lack of a single project/client/task/issue repository, which means priorities are often managed on an inefficient (and often redundant or conflicting) one-to-one basis.

The advantages of the new support model, with ResponsAbility providing a single repository of all cases (in a hub-and-spoke manner), start with collaboration and the ability for all parties to instantly contribute to a case/task/issue and have instant visibility into its status. In addition, new resources, including clients, Webcom employees, and third parties (partners), can immediately participate and be notified, while the advanced search capability within the system is an enabler for everyone.

Adoption by Webcom’s Q2O clients was initially somewhat tepid, owing to the ingrained human habit of emailing or calling a preferred contact directly, or to clients having their own issue tracking systems. Of course, there is always the need for a human touch and for chatting (as a “bonus”) with Webcom associates about “critical” issues like the “lovely” winter weather in Wisconsin or the Green Bay Packers’ revival.

Nonetheless, joking aside, since the end of 2007 ResponsAbility has been the sole vehicle for communicating, tracking, and managing tasks and cases at Webcom. Prior to that, Webcom had used the JIRA issue tracking system, which at the time allowed users to create a workflow based on a set of offered statuses.

However, at the time (things may well have changed since), users could not create statuses and workflows at will. For instance, the offered statuses were “open,” “in progress,” “closed,” and so on, but the user could not create a custom status such as “material returned,” “in engineering,” or “being analyzed.”

Further, users could add custom fields, but they could not design forms in a drag-and-drop fashion. There was no way to specify forms and fields for each action (task) either; for example, when a process passed from the “bug fixed” phase into the “in testing” phase, the user could not create a mandatory field named “test case.” And while administrators had ample controls, end users had very little control over which fields they could see on the screen, and so on.

Key ResponsAbility Design Tenets

In contrast, ResponsAbility was built with several design concepts in mind, starting with scalability in terms of users’ ability to create an unlimited number of cases, processes, statuses, status transitions, custom fields, users, user types, departments, etc.

Comparing Business Intelligence and Data Integration Best-of-breed Vendors' Extract Transform and Load Solutions

To understand the relevance of extract transform and load (ETL) components and how they fit into business intelligence (BI), one should first appreciate what data integration is and the significance of having clean, accurate data that enable successful business decisions. Within the BI industry, data integration is essential. By capturing the right information, organizations are able to perform analyses, create reports, and develop strategies that help them to not only survive, but, more importantly, to thrive.

Informatica, a leading provider of enterprise data integration software, defines data integration as "the process of combining two or more data sets together for sharing and analysis, in order to support information management inside a business". In BI terms, this means that data is extracted in its original form and stored in an interim location, where it is transformed into the format that will be used in the data warehouse. The transformation process includes validating data (e.g., filling in null zip code information in the customer database) and reformatting data fields (e.g., separating Last Name and First Name fields of customer records that are merged in one database but not others). The next step is to load the data into the data warehouse. The data is then used to create queries and data analysis builds, such as on-line analytical processing (OLAP) cubes and scorecard analyses. In a sense, extracting the proper data, transforming it by cleansing and merging records, and loading it into the target database is what allows BI solutions to build analytical tools successfully. It is also the essence of ETL functionality.
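As a minimal sketch of that extract-transform-load sequence (the field names, the default zip code, and the in-memory SQLite target are invented for illustration):

```python
# Minimal ETL sketch: extract raw customer rows, transform them (split a
# merged name field, fill a missing zip code), and load them into a target
# table. Field names and the placeholder zip code are illustrative only.
import sqlite3

raw_rows = [                       # "extract" step: data in its original form
    {"name": "Doe, Jane", "zip": "53202"},
    {"name": "Smith, John", "zip": None},
]

def transform(row: dict) -> dict:
    last, first = [part.strip() for part in row["name"].split(",", 1)]
    return {
        "last_name": last,
        "first_name": first,
        "zip": row["zip"] or "00000",   # validate: fill null zip codes
    }

warehouse = sqlite3.connect(":memory:")
warehouse.execute(
    "CREATE TABLE customers (last_name TEXT, first_name TEXT, zip TEXT)")
warehouse.executemany(                  # "load" step into the target table
    "INSERT INTO customers VALUES (:last_name, :first_name, :zip)",
    [transform(r) for r in raw_rows],
)
```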

Data Integration Components

In order to determine the most suitable ETL solution for them, organizations should evaluate their needs in terms of the core components of the data integration process, as listed below.

* Data Identification. What data does the organization need to extract and where does it come from? What end result, in terms of the data, does the organization want to analyze? Essentially, answering these questions means identifying the origin of the data, and what the relationship is between the different data sources.

* Data Extraction. How frequently does the organization require the data? Is it monthly, weekly, daily, or hourly? Where should data storing and transformation activities occur (i.e., on a dedicated server or in the data warehouse, etc.)? Considering these factors identifies the data frequency needs of the organization. For example, analysis of sales data may require the organization to load data monthly or quarterly, whereas some other data transfers may be performed multiple times a day. In determining the frequency of the data loading and transformation in the data warehouse or on the dedicated server, the organization should also consider the amount of data to be transferred and its effect on product performance.

* Data Standardization. What is the format of the organization's data, and is it currently compatible with the same data elements in other systems? For example, if the organization wants to analyze customer information and to merge customer buying patterns with customer service data, it must know if the customer is identified in the same way in both places (e.g., by customer identification [ID], phone number, or first and last name). This is crucial for ensuring that the correct data is merged and that the data is attached to the right customer throughout the data standardization process. Another data standardization issue the organization should deal with is identifying how it will manage data cleansing and data integrity functions within the data warehouse over time.

* Data Transformation. The organization should consider data transformation requirements and the interaction between the transformed data components. The critical questions are how will the data be reflected in the new database, and how will that data be merged on a row by row basis? Answering these questions involves identifying the business and data rules associated with the data to ensure accuracy in data loads.

Your Guide to Enterprise Software Selection: Part One

IT acquisition and purchasing decisions are often conducted in an atmosphere of unmet expectations, internal political agendas, vendor promises, and brand name hype. Decisions are driven by executive mandate, rule-of-thumb, or insufficient analyses based on rudimentary spreadsheet comparisons.

This is a sure recipe for failure, as demonstrated by the horror stories published continually in trade magazines and the press. We'll describe a best-practice approach to the assessment, evaluation, and selection of software—and show you how you can reduce the time and cost involved in objectively choosing the right solution.

There are three main phases within Technology Evaluation Centers' (TEC's) software assessment, evaluation, and selection methodology:

Phase 1: Defining Business and Technical Requirements
Phase 2: Software Evaluation and Analysis
Phase 3: Negotiation and Final Selection

Overview

Phase 1
TEC's methodology establishes the foundation for the ultimate success of the selection project. Successful evaluation and analysis of a system—and negotiation with a vendor—are irrelevant if the initial definition of business and technical requirements is incomplete or inaccurate. In many software selection projects, there is not enough emphasis on the importance of this phase, which causes many failures, and can even result in disaster for companies during and after implementation.

TEC's decision support system facilitates fast and accurate compilation of business processes, and maps them to the features and functions of a software solution. By closely following the steps outlined within this phase, an organization can produce a complete and understandable specification of all the needs that are to be addressed by the new solution, and is able to keep the assembled data in one easily accessible repository.

Phase 2
The evaluation and analysis of vendor solutions should proceed from finding the right vendors through to selecting a shortlist of two or three finalists. The sheer mass of data collected during this phase can be overwhelming for any organization, and the manipulation of the data even more daunting.

There may be as many as 20 or 30 qualified vendors, and each may have a list of thousands of criteria, all of which have to be evaluated one against the other. Using traditional methods can lead to serious errors—and may lead to choosing the wrong vendor solution. We'll show you how TEC's decision support system alleviates this process and seriously reduces the time required to reach a more informed and accurate choice of the right vendors to include in the shortlist.

Phase 3
The final phase covers the steps within the negotiation and the final selection process with the short-listed vendors. This includes live vendor demonstrations at the client site, where each solution can be rated by the business and selection team to verify ease-of-use, coverage of critical business processes, and functionality.

During this phase, we suggest that your selection team seek out client references from each vendor to verify their implementation, service, support, and training experiences. We'll explain how TEC's decision support system facilitates and shortens this process by loading vendor information into TEC's comparison tool to produce reports and graphs, which will support your selection team's final recommendations.

Justification of ERP Investments Part 1: Quantifiable Benefits from an ERP System

Studies that surveyed manufacturers about the impact of ERP systems on firm performance indicate that company size and industry do not affect the results. Benefits have been indicated for large and small firms, whether they make standard or custom products or are in discrete or process manufacturing environments. This section explains the quantifiable benefits in terms of several areas of improvement.


Typical Benefits

The most significant quantifiable benefits involve reductions in inventory, material costs, and labor and overhead costs, as well as improvements in customer service and sales.

Inventory reduction. Improved planning and scheduling practices typically lead to inventory reductions of 20 percent or better. This provides not only a one time reduction in assets (and inventory typically constitutes a large proportion of assets), but also provides ongoing savings of the inventory carrying costs. The cost of carrying inventory includes not only interest but also the costs of warehousing, handling, obsolescence, insurance, taxes, damage, and shrinkage. With interest rates of 10 percent, the carrying costs can be 25 percent to 30 percent.

ERP systems lead to lower inventories because manufacturers can make and buy only what is needed. Demands, rather than demand-insensitive order points, drive time-phased plans. Deliveries can be coordinated to actual need dates; orders for unneeded material can be postponed or canceled. The bills of material ensure matched sets are obtained, rather than too much of one component and not enough of another. Planned changes in the bills also prevent inventory build-up of obsolete materials. With fewer part shortages and realistic schedules, manufacturing orders can be processed to completion faster and work-in-process inventories can be reduced. Implementation of JIT philosophies can further reduce manufacturing lead times and the corresponding inventories.

Material cost reductions. Improved procurement practices lead to better vendor negotiations for prices, typically resulting in cost reductions of 5 percent or better. Valid schedules permit purchasing people to focus on vendor negotiations and quality improvement rather than on expediting shortages and getting material at premium prices. ERP systems provide negotiation information, such as projected material requirements by commodity group and vendor performance statistics. Giving suppliers better visibility of future requirements helps them achieve efficiencies that can be passed on as lower material costs.

Labor cost reductions. Improved manufacturing practices lead to fewer shortages and interruptions, and less rework and overtime. Typical labor savings from successful ERP are a 10 percent reduction in direct and indirect labor costs. By minimizing rush jobs and parts shortages, less time is needed for expediting, material handling, extra setups, disruptions, and tracking split lots or jobs that have been set aside. Production supervisors have better visibility of required work and can adjust capacity or loads to meet schedules. Supervisors have more time for managing, directing and training people. Production personnel have more time to develop better methods and improve quality and throughput.

Improved customer service and sales. Improved coordination of sales and production leads to better customer service and increased sales. Improvements in managing customer contacts, in making and meeting delivery promises, and in shorter order to ship lead times, lead to higher customer satisfaction and repeat orders. Sales people can focus on selling instead of verifying or apologizing for late deliveries. In custom product environments, configurations can be quickly identified and priced, often by sales personnel or even the customer rather than technical staff. Taken together, these improvements in customer service can lead to fewer lost sales and actual increases in sales, typically 10 percent or more.

ERP systems also provide the ability to react to changes in demand and diagnose delivery problems. Corrective actions can be taken early, such as determining shipment priorities, notifying customers of changes to promised delivery dates, or altering production schedules to satisfy demand.

Improved accounting controls. Improved collection procedures can reduce the number of days of outstanding receivables, thereby providing additional available cash. Underlying these improvements are fast accurate invoice creation directly from shipment transactions, timely customer statements, and follow through on delinquent accounts. Credit checking during order entry and improved handling of customer inquiries further reduces the number of problem accounts. Improved credit management and receivables practices typically reduce the days of outstanding receivables by 18 percent or better.

ERP System Benefits on the Balance Sheet

Benefits from improved business processes and improved information provided by an ERP system can directly affect the balance sheet of a manufacturer. To illustrate this impact, a simplified balance sheet is shown in figure 3.1 for a typical manufacturer with annual revenue of $10 million. The biggest impacts will be on inventory and accounts receivable.

In the example, the company has $3 million in inventory and $2 million in outstanding accounts receivable. Based on prior research concerning industry averages for improvements, implementation of an ERP system can lead to a 20 percent inventory reduction and an 18 percent receivables reduction.

Figure 3.1 Summarized balance sheet for a typical $10 million firm

                          Current      Typical      Benefit
                                       improvement
Current assets
  Cash and other            500,000
  Accounts receivable     2,000,000        18%       356,200
  Inventory               3,000,000        20%       600,000
Fixed assets              3,000,000
Total assets             $8,500,000                 $956,200
Current liabilities         xxx,xxx

* Inventory Reduction. A 20 percent inventory reduction results in $600,000 less inventory. Improved purchasing practices (that result in reduced material costs) could lower this number even more.

* Accounts Receivable. Current accounts receivable represent seventy-three days of outstanding receivables. An 18 percent reduction (to sixty days' receivables) results in $356,200 of additional cash available for other uses. The short calculation below reproduces both benefit figures.
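Assuming a 365-day year, the arithmetic behind both benefit figures can be reproduced as follows (a sketch of the calculation only; the rounding conventions are the article's):

```python
# Worked arithmetic for the figure 3.1 example: a $10 million (USD) firm with
# $3 million in inventory and $2 million in receivables (365-day year assumed).
revenue = 10_000_000
inventory = 3_000_000
receivables = 2_000_000

inventory_benefit = inventory * 0.20             # 20% reduction -> 600,000

daily_sales = revenue / 365                      # ~27,397 per day
days_outstanding = receivables / daily_sales     # ~73 days
target_days = 60                                 # the "sixty days' receivables" above
receivables_benefit = receivables - target_days * daily_sales   # ~356,164, reported as ~356,200

total_benefit = inventory_benefit + receivables_benefit          # ~956,200
print(round(inventory_benefit), round(receivables_benefit), round(total_benefit))
```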

Trade credit can also be maximized by taking advantage of supplier discounts and cash planning, and paying only those invoices with matching receipts. This can lead to lower requirements for cash-on-hand.

How to Define Your Business and Technical Requirements

Typical enterprise application selections begin with little mention of technology, since the first consideration is modeling the desired business processes that the new technology will enable, and then matching them to the functional requirements within any given software solution. TEC uses a standardized methodology to model and match these processes. The following steps are critical to ensuring overall success within this phase.

Step 1: Form a Cross-functional Project Team

A cross-functional team ensures that both the business and technical needs of your organization are addressed, and that each group affected by the changes understands the impact of the decision. The ideal team consists of members of the following groups: management; finance or business operations; users; consultants; and members of the IT operations and infrastructure groups.

Champions and subject matter experts (SMEs) should be chosen from each business area to work with the project team. This will ensure complete buy-in from the business side and help promote the new solution within the rest of the organization, as well as provide expert knowledge within the project team on existing processes and day-to-day operations.

Step 2: Model Business Processes Hierarchy through an Internal Needs Assessment

The project team, with the help of the champions and the SMEs, is responsible for defining and modeling business processes. The first goal is to determine the main process groups, which correspond to the individual business areas of the organization.

Within these groups, processes correspond to the high-level divisions of your business areas (see figures 1 and 2 below). Within these processes, subprocesses detail the main departments of those high-level divisions, and subprocesses in turn comprise the day-to-day tasks (activities) within each department. For each activity, there may be business-based rules describing how these day-to-day tasks are to be performed and controlled.

This large volume of data is difficult to track, organize, and manipulate using traditional methods such as spreadsheets, Word documents, and flow charts. But if this critical information is not properly stored, organized, or made easily accessible, it can cause huge time delays—which in turn can substantially increase the cost of the software selection project.
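One hypothetical way to hold that hierarchy in a single structured repository rather than scattered spreadsheets (the process names and rules below are invented for illustration) is a nested data structure that can be queried programmatically:

```python
# Hypothetical, illustrative representation of the hierarchy:
# process group -> process -> subprocess -> activities, each with optional rules.
process_model = {
    "Sales": {                                   # process group (business area)
        "Order Management": {                    # process (high-level division)
            "Quote Approval": {                  # subprocess (department)
                "activities": [
                    {"name": "Review discount level",
                     "rule": "Discounts above 15% require manager sign-off"},
                    {"name": "Issue quote to customer", "rule": None},
                ],
            },
        },
    },
}

def list_rules(model: dict) -> list[str]:
    """Collect every business rule so none gets lost in spreadsheets."""
    rules = []
    for group in model.values():
        for process in group.values():
            for sub in process.values():
                rules += [a["rule"] for a in sub["activities"] if a["rule"]]
    return rules
```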


Figure 1: Process group chart

Comparing Business Intelligence and Traditional ETL

Until recently, ETL involved uploading data at regular (e.g., monthly or weekly) intervals to drive business performance decisions and identify business opportunities. However, as BI tools become more integrated with overall business functions, including business performance management (BPM) and reporting and analysis requirements, data needs have shifted from monthly or weekly intervals to real time updates. This means that it has become more important for data transfers to accurately reflect real time business transactions, and that the number of data transfers required has increased.

Nonetheless, real time ETL doesn't necessarily refer to automatic data transfer as operational databases are updated. In terms of BI, real time may mean different things to different organizations or even different departments within these organizations. Take, for instance, an automotive manufacturer whose traditional data warehouse solutions (OLAP cubes, etc.) involved capturing data at a given point in time. The automotive manufacturer might, for example, have wanted to track and compare monthly sales with last year's sales during the same month by region, car model, and dealer size, thus requiring the data warehouse to be updated on a monthly basis. However, as the manufacturer's business decisions evolved based on this analysis, its data needs shifted from a monthly requirement to a weekly one, and on to an ever more frequent basis, eventually creating the demand for real time data. In the case of the automotive manufacturer, real time data may be useful for identifying the movement of car parts within a warehouse relative to their storage locations and comparing this information with the demand for these parts.

Such a shift in data requirements affects both the volume of data required and when the data loading occurs. The end result is that, in order to meet the changing needs of user organizations, ETL and BI vendors have concentrated on moving towards real time ETL and shifting their data loading functionality to accommodate higher volumes of data transfer.

Friday, November 27, 2009

Managing the Aches and Pains of Long Cycle Times: Automating Controls for Pharmaceutical Manufacturers

One of the biggest challenges (or business pain points) for pharmaceutical manufacturers (or life sciences companies) is the long cycles that are required for research and development (R&D) and product approval. This is a particular challenge for manufacturers of generic drugs, for which cycle times can average 20 months or more (with the full time-to-market period running upwards of 12 years).

Why are long cycles a problem?

Simply put, it comes down to the familiar equation that “time = money.” More time needed means more capital spent, and manufacturers watch their bottom lines slip farther and farther away. To begin to formulate a plan to address the issue of long cycle times, it’s important to understand the factors that contribute to this challenge.

Long R&D cycles happen for a number of reasons. One is the increasing need to comply with regulations, including the Food and Drug Administration’s (FDA’s) Title 21 Code of Federal Regulations (CFR) Part 11, which applies to pharmaceutical manufacturers that employ electronic record-keeping and electronic and digital signatures.

This increasing need often means that additional administrative time must be spent on ensuring that the technical and procedural protocols are set up correctly and are doing what they are supposed to do.

Another reason for long cycle times has to do with the need to ensure that all stages of product development are adequately documented for audits. Whether a manufacturer is using paper or electronic methods of data storage, there must be a reliable, consistent, secure, and accessible method of storing all documents related to the research, development, manufacture, and release of all drugs.

Every change to a document must be retained, and the integrity of the versions kept intact. For manufacturers straddling the line between paper-based and electronic methods, all paper-based documents need to be transferred and saved in digital form, a process that can require considerable time for scanning or manually entering data.

What are the business risks involved in longer R&D cycles and product approval?

Fewer products can be developed or manufactured concurrently, which means fewer products get to market. And fewer products to market can mean a decrease in the company’s incoming cash flow (i.e., decreased profits). Additional worry may come from the fact that, with this increase in time-to-market, other competing manufacturers may develop a similar drug and release it sooner, further diminishing profits through lost market share and a shortened product life cycle. A delayed or lengthened cycle time can seriously affect the return on investment (ROI) for a given new drug or product.

What can help?

A software solution that implements automated controls that address compliance issues, including 21 CFR Part 11.

How does 21 CFR Part 11 relate to product R&D and approvals?

For all of the processes involved in getting a drug to market, strict policies must be established and followed by a company regarding the use of electronic records. Each step of the product R&D and approval processes must be, according to the dictates of 21 CFR Part 11, consistent, reliable, and repeatable—in other words, each version of every document must be archived and easily retrieved for the purposes of inspection or auditing.

But this thorough documentation also means that the approval process can be streamlined with automated functionality, since the time needed to send documents to the approving individual(s) is reduced: with a centralized system, all users may access documents, provided they are authorized to do so according to level-specific electronic signatures, and the system can be configured to send automatic notifications. Consequently, document turnaround time can be reduced, while the authenticity, integrity, non-repudiation, and confidentiality of documents are assured.

Furthermore, for the purposes of an audit, the automated system can aid a company by streamlining document retrieval. With a system that helps you organize and maintain accurate records of all processes, time isn’t wasted on following a lengthy paper trail of documents to ensure that changes have been authorized and tracked, and that all paper versions are now available.
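To illustrate only the general idea of retained versions plus an audit trail, here is a minimal Python sketch with invented names. It is emphatically not a 21 CFR Part 11 implementation; as noted just below, configuration, policies, and validation are what ultimately determine compliance.

```python
import hashlib
from datetime import datetime, timezone

class DocumentArchive:
    """Illustrative only: every saved version is retained and every action is logged."""

    def __init__(self):
        self.versions = {}      # doc_id -> list of (timestamp, user, content, checksum)
        self.audit_trail = []   # append-only log of every action

    def _log(self, user, action, doc_id):
        self.audit_trail.append((datetime.now(timezone.utc).isoformat(), user, action, doc_id))

    def save(self, doc_id, content, user):
        checksum = hashlib.sha256(content.encode()).hexdigest()  # per-version integrity check
        self.versions.setdefault(doc_id, []).append(
            (datetime.now(timezone.utc).isoformat(), user, content, checksum)
        )
        self._log(user, "save new version", doc_id)

    def retrieve(self, doc_id, user, version=-1):
        self._log(user, "retrieve", doc_id)   # retrieval is audited too
        return self.versions[doc_id][version]

archive = DocumentArchive()
archive.save("SOP-001", "Batch release procedure, draft 1", user="j.doe")
archive.save("SOP-001", "Batch release procedure, draft 2", user="j.doe")
print(len(archive.versions["SOP-001"]), "versions retained")
print(archive.audit_trail[-1])
```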

However, it is very important to realize that using an off-the-shelf software application to automate all of the processes involved in electronic signatures, document archiving and change management, and tracking and auditing will not automatically render your company compliant with 21 CFR Part 11.

You must also configure the system so that it provides the validation you need: establish rules and policies for the application and follow them consistently, so that your processes for electronic signatures and data management remain compliant. Both procedural and administrative controls must be in place to ensure process compliance.

Taming the SOA Beast – Part 2

Mindreef joined the Progress Actional SOA Management product family that provides policy-based visibility, security, and control for services, middleware, and business processes. This acquisition continues Progress’ expansion of its burgeoning SOA portfolio and strengthens the company’s position as a leader in independent, standards-based, heterogeneous, distributed SOA enterprise infrastructures.

Prior to being acquired, Mindreef decoupled some plug-in features from its previously all-in-one SOAPscope Server suite.

One such capability was the SOAPscope Policy Rules Manager, which tests compliance with rules such as whether the Simple Object Access Protocol (SOAP) and Web Services Description Language (WSDL) headers comply with the WS-I Basic Profile for Web services interoperability. The feature also checks whether the extensible markup language (XML) schema is properly formed, and whether the “contracts” between Web services are valid, so that companies can ensure they won’t break at run-time because of faulty logic.
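As a rough illustration of this kind of check (well-formedness plus validation against a schema “contract”), here is a small Python sketch using the third-party lxml library. The schema and message are invented, and this is not how SOAPscope itself is implemented.

```python
from lxml import etree  # third-party library commonly used for XML Schema validation

# Is the message well-formed XML, and does it satisfy the schema ("contract")
# it claims to follow? The schema and message below are invented examples.
xsd = etree.XMLSchema(etree.fromstring(b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="GetQuoteRequest">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="symbol" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
"""))

message = b"<GetQuoteRequest><symbol>PRGS</symbol></GetQuoteRequest>"

try:
    doc = etree.fromstring(message)        # raises if the XML is not well-formed
except etree.XMLSyntaxError as err:
    print("not well-formed:", err)
else:
    print("valid against schema:", xsd.validate(doc))  # True / False
```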

Another plug-in, called Load Check, provides a pre-test simulation of the system’s performance. The underlying idea was to counter the bad practice whereby, when developing Web services-based applications, load or performance testing tends to be an afterthought, one often compensated for by purchasing extra hardware after the fact, and at a hefty price.
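The following toy Python sketch conveys the general idea behind such pre-test load simulation: fire a burst of concurrent calls and inspect the latency distribution before going live. The stubbed service call, worker count, and timings are invented; a real load test would invoke the actual Web service endpoint.

```python
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

# Toy load simulation: the stub below stands in for a SOAP/HTTP round trip.
def call_service():
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))  # simulated service latency
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=25) as pool:                       # 25 concurrent callers
    latencies = list(pool.map(lambda _: call_service(), range(500)))   # 500 calls in total

print(f"mean latency: {statistics.mean(latencies) * 1000:.1f} ms")
print(f"95th percentile: {sorted(latencies)[int(len(latencies) * 0.95)] * 1000:.1f} ms")
```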

Progress Actional + Mindreef

Like its parent, Mindreef has always designed its products as a good fit for third-party IT governance solutions, with the ability to check on whether Web services are well formed and remain consistent with business policies.

Progress does not release the number of customers it has for specific products or as a corporation, although it admits to gaining access to more than 3,000 of Mindreef’s customers at more than 1,200 organizations worldwide. The ideal customers for the combination of Progress Actional and Mindreef SOAPscope are those seeking full life-cycle quality management of their SOA environments, ranging from design through operational deployment.

Mindreef SOAPscope is a recognized testing and validation software product for SOA services at the design stage, while Actional is the market-leading SOA management, validation, and monitoring software for operational SOA. Thus, the combination of the two provides a solution that is likely the first in the market to address the entire SOA lifecycle with SOA quality, validation, and runtime governance.

Progress Actional and Mindreef provide a deep level of SOA management, testing, validation, and run-time governance functionality, but not all organizations that have begun implementing SOA environments yet recognize the need for that functionality. As a result, those companies that have felt the significant pain of having to diagnose why SOA composite applications have failed in order to get them rapidly back up and running, or that have discovered rogue Web services within their environments into which they have no visibility, should see the benefit of deploying Progress Actional and Mindreef.

Progress Actional and Mindreef are sold worldwide from offices in North America, Latin America, Europe, and Asia. A complete list of Progress Software offices is available here.

While hardly any player in the market currently matches the lifecycle SOA quality capabilities that the combination of Actional and Mindreef provides, traditional competitors for Actional include AmberPoint, SOA Software, IBM, Hewlett-Packard (HP), Layer 7 Technologies, and Computer Associates (CA).

As for Mindreef, while it can also be hard to find a single product that competes head to head with SOAPscope functionally, some other vendors’ functionality is comparable to that found in SOAPscope. Namely, in sales situations, Mindreef sometimes runs across IBM Rational Software and HP/Mercury, and occasionally some of the smaller niche players like Parasoft Solutions, iTKO LISA, PushToTest, and Crosscheck Networks.

Forget Not about Oracle Fusion Either

The recent acquisition of the former middleware competitor BEA Systems has made Oracle the middleware market leader, at least in the Java world. The idea behind the ambitiously broad Oracle Fusion Middleware (OFM) suite is the following:

* to enable the enterprise applications’ architecture shift to SOA
* to become a comprehensive platform for developing and deploying service-oriented enterprise applications
* to form the foundation for modernizing and integrating the burgeoning Oracle Applications portfolio

Oracle’s middleware product strategy is foremost to provide a complete (unified) and pre-integrated middleware suite that is also modular, standards-based, open, and thus “hot pluggable.” Furthermore, the strategy is to develop and deploy enterprise applications on the Internet via unifying SOA Management, business process management (BPM), business intelligence (BI), enterprise content management (ECM), and enterprise 2.0 capabilities.

The third part of the strategy, achieving the lowest total cost of ownership (TCO) by managing systems, applications, and user identities on low-cost hardware and storage, has been overplayed by virtually all vendors to the point that it hardly rings as differentiating, but it is certainly a worthwhile aim for Oracle.

Asserting SOA Governance Competitiveness

As for the product strategy for the Oracle SOA Governance suite, a subset of OFM, it starts with offering an integrated and complete lifecycle SOA governance platform comprising tools, a service registry and repository, a policy manager, a monitoring console, and so on.

Additionally, the goal is to enable visibility into an organization’s service portfolio via the ability to discover, categorize, manage change, audit usage, and monitor Web services. Last but not least, as discussed in Part 1, the ultimate goal is to provide better control over the lifecycle of services by enforcing policy compliance from software development to operations.

But what really impressed me post-acquisition was Oracle’s due diligence and even (atypical) humility in admitting BEA’s advantages (e.g., in terms of Enterprise Service Bus [ESB] and service mediation capabilities) and bundling them with Oracle’s established capabilities in workflow management and Web services orchestration. Other specific areas where BEA had superior technologies were Java virtual machines, transaction processing monitors, and certain security products. Conversely, Oracle has products such as BI, ECM, and identity management, where BEA had none.

Accordingly, Oracle has stratified the combined Oracle and BEA middleware products into the following three groups:

1. Strategic products — BEA products that are being adopted immediately into OFM with limited re-design, since in most cases no corresponding Oracle products exist. Where corresponding Oracle products do exist, they will converge with the BEA products via rapid integration over the next 12 to 18 months;
2. Continued and converged products — BEA products that are being incrementally re-designed to integrate with OFM, with gradual integration into existing OFM technology to broaden features and provide automated upgrades. Oracle hereby grants continued development and maintenance for at least nine years; and
3. Maintenance (a.k.a. “stabilized”) products — those products that even the formerly independent BEA had marked as end-of-life (EOL) due to limited adoption prior to Oracle’s acquisition. Oracle hereby promises continued maintenance with appropriate fixes for five years.

Translating this into the product offerings for Oracle SOA Governance, most of the Oracle and BEA products will end up in the strategic category, starting with BEA AquaLogic Enterprise Repository at the core. It is a repository to capture, share, and change-manage SOA artifacts across the lifecycle, with capabilities like audit trails and metrics, service level agreement (SLA) and policy management, rules and standards definition, WSDL and XML Schema Definition (XSD) schemas, capturing and modeling business requirements, and dependency management.

For its part, Oracle offers Oracle Service Registry, a standards-based Universal Description, Discovery and Integration (UDDI) v3.0 registry to publish and discover Web services. Furthermore, Oracle Web Services Manager is a policy manager to define and manage security, auditing, and quality of service (QoS) policies on Web services.

Taming the SOA Beast – Part 1

Certainly, I admit to not being a programmer or a techie expert (not to use somewhat derogatory words like “geek” or “nerd”) per se. Still, my engineering background and years of experience as a functional consultant should suffice for understanding the advantages and possible perils of service oriented architecture (SOA).

On one hand, SOA’s advantages of flexibility (agility), components’ reusability and standards-based interoperability have been well publicized. On the other hand, these benefits come at a price: the difficulty of governing and managing all these mushrooming “software components without borders”, as they stem from different origins and yet are able to “talk to each other” and exchange data and process steps, while being constantly updated by their respective originators (authors, owners, etc.).

At least one good (or comforting) fact about the traditional approach to application development was that old monolithic applications would have a defined beginning and end, and there was always clear control over the source code.

Instead, the new SOA paradigm entails composite applications assembled from diverse Web services (components) that can be written in different languages, and whose source code is hardly ever accessible to the consuming parties (other services). In fact, each component exposes itself only in terms of what data and processes it needs as input and what it will return as output; what goes “under the hood” remains largely a “black box,” or someone’s educated guess at best.

Consequently, SOA causes radical changes in the well-established borders (if not their complete blurring) of software testing, since runtime (production) issues are melding with design-time (coding) issues, and the traditional silos between developers, software architects and their quality assurance (QA) peers appear to be diminishing when it comes to Web services.

Transparency is therefore crucial to eliminate the potential chaos and complexity of SOA. Otherwise, the introduction of SOA will have simply moved the problem area from a low level (coding) to a higher level (cross-enterprise processes), without a reduction in problems. In fact, the problems should only abound in a distributed, heterogeneous multi-enterprise environment.

Then and Now

Back to the traditional practices and mindset: the software world considers design as development-centric (i.e., a “sandbox” scenario), and runtime as operation-centric (i.e., a part of a real-life customer scenario). But with SOA that distinction blurs, since Web services are being updated on an ongoing basis, thus magnifying the issues of recurring operations testing and management.

Namely, companies still have to do component-based software testing (to ascertain whether the code is behaving as expected) at the micro (individual component) level, but there is also application development at the macro (business process) level, since composite applications are, well, composed of many disparate Web services. In other words, programmers are still doing traditional development work, but now that development work becomes involved in infrastructure issues too.

For instance, what if a Web service (e.g., obtaining exchange rates, weather information, street maps information, air flight information, corporate credit rating information, transportation carrier rates, etc.), which is part of a long chain (composite application), gets significantly modified or even goes out of commission? To that end, companies should have the option of restricting the service’s possibly negative influence in the chain (process) until a signaling mechanism is in place, which can highlight changes that may compromise the ultimate composite application.
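One generic way to restrict a failing service’s influence on the chain is a circuit-breaker wrapper that falls back to a cached or default answer after repeated failures. The Python sketch below illustrates that pattern only; it is not a feature of any product discussed here, and the exchange-rate example is invented.

```python
import time

# Generic circuit-breaker sketch: after repeated failures the real call is
# short-circuited to a fallback until the service has had time to recover.
class CircuitBreaker:
    def __init__(self, call, fallback, max_failures=3, reset_after=30.0):
        self.call, self.fallback = call, fallback
        self.max_failures, self.reset_after = max_failures, reset_after
        self.failures, self.opened_at = 0, None

    def __call__(self, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                return self.fallback(*args, **kwargs)   # circuit open: skip the service
            self.opened_at, self.failures = None, 0     # cool-down over: try the service again
        try:
            result = self.call(*args, **kwargs)
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()            # trip the breaker
            return self.fallback(*args, **kwargs)

# Hypothetical usage: wrap a flaky exchange-rate service with a cached fallback.
get_rate = CircuitBreaker(
    call=lambda ccy: 1 / 0,                        # stand-in for a failing Web service call
    fallback=lambda ccy: {"EUR": 1.47}.get(ccy),   # last known value
)
print(get_rate("EUR"))
```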

Functional testing in such environments is a challenge because, by nature, Web services are not visual like conventional, user-facing software applications. In place of a front-end or user interface (UI), some astute testing software can overlay a form that allows team members to see the underlying schema (data structure) of the Web service being tested.

Furthermore, testing SOA applications is problematic since it is not only difficult for a company to know if a particular Web service will deliver on its “contract”, but also, even if it does, whether it will maintain the company’s adopted standards of performance (e.g., under increased loads) and security while complying with its adopted regulatory policies.

Thus, modern SOA software testing tools increasingly provide support for multiple roles, whereby architects can codify policies and rules, developers check for compliance during the test cycle, and support and operations staff can check for compliance issues when problems occur. The new crop of SOA testing tools also increasingly support a range of tests, including functional and regression testing, interoperability testing, and policy conformance. Contrary to traditional software testing tools that inspect code, Web services testing tools deal with the quality of the extensible markup language (XML) messaging layer.
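To show what testing at the XML messaging layer (rather than the code) can look like, here is a small, purely illustrative Python unit test that asserts on the structure and values of a canned response; the element names and values are invented, and a real functional or regression test would invoke the live service.

```python
import unittest
import xml.etree.ElementTree as ET

# Illustrative only: the test inspects the XML a service returns, not the
# service's source code. The canned response below is an invented example.
CANNED_RESPONSE = """
<GetQuoteResponse>
  <symbol>PRGS</symbol>
  <price currency="USD">24.10</price>
</GetQuoteResponse>
"""

class QuoteServiceContractTest(unittest.TestCase):
    def test_response_carries_expected_fields(self):
        root = ET.fromstring(CANNED_RESPONSE)
        self.assertEqual(root.findtext("symbol"), "PRGS")   # expected element present
        price = root.find("price")
        self.assertEqual(price.get("currency"), "USD")      # expected attribute value
        self.assertGreater(float(price.text), 0)            # value within a sane range

if __name__ == "__main__":
    unittest.main()
```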

And although both traditional and Web services testing tools deal with syntax, for Web services, team members require a higher-level awareness of business rules and service policies. This is owing to the highly distributed SOA environment, which makes keeping track of changes difficult and underscores the new SOA management complexity.

In fact, change management in pre- and post-application development is essential to filter out redundant changes, prioritize changes, and resolve conflicting changes. But also, if a certain message between the points A and B doesn’t pass in a real-life scenario, there has to be awareness of what needs to be done to rectify it now and in the future.

The abovementioned examples of the numerous problems inherent in SOA have caused the previously mentioned siloed areas to come much closer to each other. These areas are software lifecycle management, application performance management, and information technology (IT) governance, with change management acting as a core information source on all changes in the environment. This union should enable companies to discover which Web services and components exist, who their owners are, and which services and components are actually consumed, and by which applications and business processes.

It’s About Process (or the Ability to be Responsive) — Part IV

In addition to the examples described in Part III, another example of the ResponsAbility software in use can be found at Grayhill, Inc., an electronics manufacturer from Lagrange, Illinois (US) serving industrial and government customers. While the company has long been a WebSource CPQ user for sales configuration purposes, the ResponsAbility sibling was later introduced for managing several processes, among them product returns or return merchandise authorizations (RMAs).

Customer return requests are either imported from the company’s enterprise resource planning (ERP) system or entered directly by customers and/or Grayhill associates into ResponsAbility as a “request for material return.” Based on the data entered via a customized form, the return is authorized or denied. Namely, a default assignee reviews a request and approves it, rejects it, or asks the customer for additional clarification.

Upon authorization, when the goods are received, a case is assigned to the quality assurance (QA) team. This is another “gate review” step in the process, where the quality team determines whether the failure is due to a product defect or to misuse (user-induced damage). If a case is determined to be a defect, the part is repaired at no cost or a new part is sent to the customer.

The defective part is also sent to the engineering department for analysis to determine the root cause and future corrective actions. Namely, in order to ensure the highest quality for which Grayhill is known, the case cannot be closed until all the corrective and preventive action (CAPA) requirements are fulfilled. To that end, the following outputs must be generated: a detailed explanation of the root cause of the problem, the short-term fix, the long-term fix, a final report sent to the customer, etc.

If it is not a defective part case, the case is closed and the goods are returned to the customer, who may in turn elect to convert it to a special service request case type. Logically then, another workflow process is followed, consisting of steps such as creating a service estimate, approval, service fulfillment (repair), invoicing, etc.

In other words, in case of misuse, the customer is asked to authorize a repair for a fee. If and when approval is received, the product is repaired and the case is closed. Similar to the new-feature-request vs. software-bug example from Part III, a repair-for-fee service follows its own workflow through the repair department and QA, after which the repaired product is shipped to the customer.
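To picture the gated flow just described, here is a simplified Python sketch of the return process as a state machine; the states, events, and transitions paraphrase the description above and are not ResponsAbility’s actual data model.

```python
# Simplified RMA workflow as a state machine; states and events are invented
# paraphrases of the gated process described in the text.
RMA_WORKFLOW = {
    "request_submitted": {"approve": "goods_awaited", "reject": "closed",
                          "clarify": "request_submitted"},
    "goods_awaited":     {"goods_received": "qa_review"},
    "qa_review":         {"defect": "capa_open", "misuse": "fee_quote_sent"},
    "capa_open":         {"capa_complete": "closed"},   # root cause, fixes, final report
    "fee_quote_sent":    {"customer_approves": "repair", "customer_declines": "closed"},
    "repair":            {"shipped": "closed"},
}

def advance(state, event):
    try:
        return RMA_WORKFLOW[state][event]
    except KeyError:
        raise ValueError(f"event '{event}' not allowed in state '{state}'")

# Example: a defective part flows from request through QA to CAPA and closure.
state = "request_submitted"
for event in ["approve", "goods_received", "defect", "capa_complete"]:
    state = advance(state, event)
print(state)  # closed
```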

Ken Hoving, Grayhill’s vice president (VP) of corporate quality, said:

“The Webcom solution allowed us to consolidate all of our customer corrective actions in one system and enable web access across the entire organization, including our customers, resulting in cycle time improvements and increased customer satisfaction.”

Also, the company asserts that due to all the system’s nifty drag-and-drop Web 2.0 personalization capabilities for both users and administrators, the BPM tool is not something that users feel forced to use, but they truly want to use it because it helps them to do a better job. They do not have to worry about forgetting to do something or missing a step in a rush, since ResponsAbility ensures that the process is thorough and consistent each time.

Another important process that ResponsAbility enables at Grayhill is SDPR (Special Design Pricing Request).

Namely, when a prospective customer inquires about a product that Grayhill does not currently manufacture as a standard item, such a request gets routed through a number of departments, starting with sales, which captures the detailed inquiry/request. Then the engineering team estimates the cost and time to complete the special request, while the marketing and accounting staff analyze the economic viability of the special job (it is still expected to be batch/series production rather than a one-off engineer-to-order [ETO] product) and create a catalog number and its price (quote).

Before that happens and the sales department can communicate Grayhill’s interest and official price (quote) back to the customer, several collaborative iterations have to take place between the customer, Grayhill, and its vendors (e.g., discussions about the cost and lead time of special tooling and fixtures).

Product Information Management Example

Broan-NuTone, based in Hartford, Wisconsin (US), and North America’s leading manufacturer and distributor of residential ventilation products, is another combined WebSource CPQ and ResponsAbility user. Its products include range hoods, ventilation fans, heater/fan/light combination units, Indoor Air Quality (IAQ) Fresh Air Systems, built-in heaters, whole-house fans, attic ventilators, paddle fans, and trash compactors.

The company has thousands of products, each with a slew of attributes such as length, width, material, standards to comply with (e.g., the UL Safety Standard, Canadian Standards Association [CSA], CE-Marking, etc.), voltage, power, air flow, and so on. The goal is to publish all that vast catalog data electronically via WebSource CPQ.

However, that cannot happen without consolidating all of the above data for all of the company’s products. ResponsAbility comes into the picture here, whereby each product will go through a special product information management (PIM) workflow.

Namely, the engineering team has to fill in over a hundred data points for each product, the marketing staff add their pertinent data, and product management then has to fill in the various product prices (list price, distributor price, wholesale price, etc.). Once the PIM case is closed, a prepared Microsoft Excel document with all of the required data about all the products in a product family can be imported into WebSource CPQ.
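As a simplified illustration of that consolidation step, the following Python sketch merges each team’s data points into one record per product and writes the result to a flat file ready for catalog import; the field names, values, and CSV format (standing in for the Excel document) are invented.

```python
import csv

# Each team contributes its own data points for a product; the records are
# merged per SKU and written out for catalog import. All values are invented.
engineering = {"BN-100": {"width_in": 30, "cfm": 250, "standard": "UL/CSA"}}
marketing   = {"BN-100": {"description": "Under-cabinet range hood"}}
pricing     = {"BN-100": {"list_price": 189.00, "distributor_price": 132.30}}

records = []
for sku in engineering:
    record = {"sku": sku}
    for source in (engineering, marketing, pricing):
        record.update(source.get(sku, {}))   # consolidate each team's fields
    records.append(record)

with open("pim_export.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
```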

“After months of review and the evaluation of numerous vendors to help implement a Product Information Management system, we chose ResponsAbility from Webcom”, stated Mark Hughes, Internet Marketing Manager at Broan-NuTone. “Having several thousand products to manage from conception to obsolescence, we wanted to have stability out of the box. We feel that ResponsAbility is the perfect fit,” added Hughes.

Underlying ResponsAbility Technology

With some research indicating that customer acquisition costs several times more than customer retention, ResponsAbility complements Webcom’s quote-to-order (Q2O) solution, WebSource CPQ, and continues the company’s focus on simplifying complex business processes.

“Attaining your goals and objectives requires not only a focus on obtaining new business through a quote-to-order solution such as WebSource CPQ, but just as rigorous a focus on retaining your most treasured asset, your customers”, commented Aleksandar Ivanovic, Webcom’s chief executive officer (CEO) and founder.

“ResponsAbility is just the type of solution needed to help drive customer satisfaction, innovation and repeat business”, added Ivanovic. “Especially in today’s uncertain economy, driving productivity through repeatable and reliable processes is crucial to success, and ResponsAbility could be a valuable tool helping companies improve customer service through nimbleness and implement process control.”

However, in order not to create internal competition for research and development (R&D) resources, WebSource CPQ and ResponsAbility, although both offered on demand, have intentionally been developed on two different technologies, the Microsoft .NET Framework and Java 2 Enterprise Edition (J2EE), respectively. For more information, see TEC’s earlier article entitled Understand J2EE and .NET Environments Before You Choose.

Some best-practices sharing between the two teams could still be possible on the user interface (UI) side, since both products leverage Asynchronous JavaScript and XML (AJAX) for rich client enablement and Web 2.0 gadgets. Although the two products are currently English-only, a common translation mechanism for other languages is being developed. Both products will be able to leverage this mechanism for deployments in several languages. However, the decision on which languages to tackle first has yet to be made.

But, in contrast to WebSource CPQ, ResponsAbility is enabled for Hibernate’s database-independent object/relational persistence and query service. The product features full audit trail and archiving capabilities, and the ability to export data in the CSV (comma-separated values), Microsoft Excel, extensible markup language (XML), Adobe PDF (Portable Document Format), and RTF (Rich Text Format) file formats.

KISS IT or Leave IT

Webcom’s main challenge with the new workflow/BPM product will be to balance its “keep it straight and simple (KISS)” mantra with the complexity of full-fledged BPM application deployments. On the one hand, the vendor positions ResponsAbility as a “lite BPM” product, given that it features many more capabilities than a mere workflow product; on the other hand, its functional footprint is at this stage far more limited than that of any notable BPM suite.

To be fair, some BPM functional requirements can be rendered moot in the on-demand model. In fact, product versioning, acceptance testing and/or whether workflow notification mechanisms can integrate with desktop products or interact via email are all capabilities that are a “big deal” for client/server on-premise BPM deployments, but are virtually irrelevant in software as a service (SaaS) subscription-based deployments.

The same goes for integration with third-party integrated development environments (IDEs), owing to the Web-based workflow modeling environment within ResponsAbility. Indeed, IDEs like Microsoft Visual Studio are relevant for on-premise programming development, i.e., for writing source code, compiling it, and producing executable code. In contrast, workflow modeling within ResponsAbility does not require coding, compiling, server deployment, and so on. Furthermore, the SaaS deployment model completely obviates the need to buy and install an IDE.

It might be interesting to note here that Salesforce.com, when it started several years ago (and likely even today), had only a fraction of the customer relationship management (CRM) functionality that Oracle Siebel has had (and still has). Still, this functional deficiency did not stop the on-demand CRM pioneer from succeeding.

The goal is not necessarily to out-feature other software packages, since most of them already have so much functionality that much thereof is never implemented or used (as can be seen in TEC’s article entitled Application Erosion: Eating Away at Your Hard Earned Value).

Thus, Webcom’s main goal is to make ResponsAbility so easy to set up and so easy to use that there will never be a failed implementation or a disgruntled customer. The goal is to quickly and simply help people to get their respective jobs done in a way that they get almost addicted to the tool, so much so that they cannot even imagine doing it any other way.

For what it’s worth, getting back to the “eating own dog food” mantra from Part III, Webcom’s staff admits to being addicted to ResponsAbility. Looking at the statistics available in the application, each Webcom employee will have personally performed thousands of transactions therein.

In the next product release, due in the fall of 2008 (which is another advantage of the SaaS development, i.e., the frequency of new releases), Webcom will be adding several new features, such as visual workflow/process designer, rules and conditions, escalations, service level agreement (SLA) tiers, field dependencies, scheduled events, analytics (graphs, charts, trends), etc. Features like Web Services application programming interface (API), support for personal digital assistant (PDA) and other mobile devices, case and task interdependencies, etc. might come in future product releases.

While the vendor strongly believes that ease of use and ease of setup are far more important than a long list of out-of-the-box supported features, it is necessary to have some of those features in the request for information (RFI)/request for proposal (RFP) phase of any selection project to avoid outright elimination.