The cause of poor customer service ratings, ineffective marketing initiatives, faulty financial planning, and the increase in fraudulent activity can, in many cases, relate back to an organization's management of its data. As the data collected and stored in organizations has grown exponentially over the past few years, its proper management has become critical to the successful implementation of such business initiatives as product marketing and corporate planning. Additionally, as fraud and acts of terror receive greater attention, it has become essential to use data to identify people and their relationships with one another.

This article will define master data management (MDM) and explain how customer data integration (CDI) fits within MDM's framework. Additionally, this article will provide an understanding of how MDM and CDI differ from entity analytics, outline their practical uses, and discuss how organizations can leverage their benefits. Various applications of entity analytics, including examples of its application to different types of organizations, will be highlighted along with the benefits it offers organizations in such service industries as government, security, banking, and insurance.

Data Management—Its Broad Spectrum

MDM has emerged to provide organizations with the tools to manage data and data definitions effectively throughout an organization in order to present a consistent view of the organization's data. In essence, MDM overcomes the silos of data created by different departments and provides an operational view of the information so that it may be leveraged by the entire organization. It focuses on the identification and management of reference data across the organization to create one consistent view of that data. In practice, different subsets of MDM address separate aspects of an organization's needs.

MDM's importance becomes apparent when a customer service representative (CSR) cannot access customer information due to inconsistencies introduced by a corporate acquisition or a new system implementation, which may lead to the frustration (or even alienation) of the customer. Add to this the extra time the CSR spends locating the appropriate data, and the issue extends to wasted time and money.

CDI is a subset of MDM, and serves to consolidate the many views of a customer within the organization into one centralized structure. This data consolidation provides the CSR with the information required or the ability to link to the required information, which may include billing, accounts receivable, etc. Once the data is consolidated, references to each customer file are created that link to one another and assign the "best" record from the available information. Consequently, data inconsistencies that occur across disparate systems, such as multiple address formats, are cleansed based on defined business rules to create one version of customer data that will be viewed across multiple departments within the organization.

The creation of "one version of the truth" presents unique challenges. In many organizations, there are multiple views of the customer, such as accounts payable, call center, shipping, etc. Each profile may have the same customer name, but a different address or other associated information, such as a unique customer number for each department, making it difficult to link one person to multiple processes. The difficulty lies in determining which view is the most correct. For example, if four versions of the same customer name and associated address exist, one version must be chosen from the four files to represent the most correct view in order to create a consolidated profile of that customer. A further complication is that each department may define "customer" differently, making reconciliation of customer data an enormous task. Organizations often profile their customers differently in systems across the organization, giving employees an incomplete view of the customer. Resolving these issues allows the redundant or inaccurate customer records to be purged.
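
To make the "best record" idea concrete, the sketch below shows one way a CDI engine might pick a surviving record and normalize addresses. It is only an illustration: the field names, sample records, and business rules (prefer the most recently updated record, standardize street abbreviations) are assumptions, not any particular product's logic.

```python
from datetime import date

# Four departmental views of the same customer (invented sample data).
records = [
    {"source": "billing",     "name": "J. Smith",   "address": "12 Main St.",       "updated": date(2005, 3, 1)},
    {"source": "call_center", "name": "John Smith", "address": "12 Main Street",    "updated": date(2006, 1, 15)},
    {"source": "shipping",    "name": "John Smith", "address": "12 MAIN STREET",    "updated": date(2005, 11, 2)},
    {"source": "marketing",   "name": "Jon Smith",  "address": "12 Main St, Apt 1", "updated": date(2004, 7, 9)},
]

def normalize_address(addr: str) -> str:
    """Business rule: one canonical, lower-case address format."""
    return addr.lower().replace("street", "st").replace(".", "").replace(",", "").strip()

def best_record(recs):
    """Survivorship rule: prefer the most recently updated record,
    then the most complete (longest) name."""
    return max(recs, key=lambda r: (r["updated"], len(r["name"])))

golden = dict(best_record(records))
golden["address"] = normalize_address(golden["address"])
print("Golden record:", golden)
```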

Aside from incomplete records, as the customer information is entered into the system multiple times, more silos are created, amplifying the problem. In addition to CSRs and employees having direct contact with the customer, marketing is another department that may have a different or incomplete view of the customer. This can translate into ineffective marketing campaigns and missed revenue opportunities. Although this last example may seem farfetched, the reality is that poor management of data within an organization affects the bottom line. CDI, when implemented properly, can not only reduce costs, but also increase sales, customer service ratings, and customer loyalty.
As data becomes more complex, data management strategies have been applied in new ways and used more widely to address not only organizational needs, but also fraud detection and security. IBM's Entity Analytics Solution (EAS) addresses the needs of organizations such as government agencies and financial and insurance institutions to combat fraud and terrorism by applying data management techniques in a different way than CDI. Essentially, the concept behind the EAS platform is "the more data collected, the better." Instead of discarding extra information, as CDI does, EAS takes the opposite direction, aggregating, grouping, and resolving identity attributes in order to use new, old, accurate, inaccurate, and seemingly irrelevant attributes alike. This supports the development of pattern recognition. For example, if a person collects more than one social security check using two or more separate addresses, EAS will identify the fact that a particular individual collects multiple checks sent to various addresses, and will create an alert in the system.
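
The social security check example can be expressed as a very small entity-resolution rule: keep every observation, group records by a shared identity attribute, and alert when one identity is tied to multiple addresses. The sketch below is illustrative only; it is not IBM's EAS implementation, and the record layout is invented.

```python
from collections import defaultdict

# Invented observations; each check payment is kept rather than deduplicated.
payments = [
    {"benefit_id": "123-45-6789", "address": "12 Main St"},
    {"benefit_id": "987-65-4321", "address": "4 Oak Ave"},
    {"benefit_id": "123-45-6789", "address": "77 Elm Rd"},  # same person, second address
]

# Aggregate everything, accurate or not, instead of discarding "extra" records.
addresses_by_id = defaultdict(set)
for payment in payments:
    addresses_by_id[payment["benefit_id"]].add(payment["address"])

# Flag any identity that collects checks at more than one address.
for benefit_id, addrs in addresses_by_id.items():
    if len(addrs) > 1:
        print(f"ALERT: {benefit_id} receives checks at {len(addrs)} addresses: {sorted(addrs)}")
```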

The ability to link individuals to multiple data sets and determine their interconnectivity supports the proactive identification of potential fraudulent or criminal activity. IBM, with its acquisition of Language Analysis Systems (LAS), has started to address these needs through IBM Global Name Recognition. Instead of taking a business intelligence data integration or customer relationship management (CRM) customer data integration approach (whereby data cleansing activities take place to create one version of the truth), Entity Analytics takes the opposite approach, identifying recurring data patterns to address terrorism and fraud through its Terrorism and Fraud Intelligence Framework (T&FI). The software addresses the issues of searching and managing data on individuals across geographic regions, customers within financial institutions, etc., to meet the demands of managing data sets from diverse cultures and geographic regions. This goes beyond name recognition to analyze how names are interconnected through the identification of recurring data patterns and entity connections. These connections are flagged based on rules created to identify suspicious transactions or behavior.

IBM Entity Analytics Software Offering

IBM Entity Analytics Solutions, together with Global Name Recognition, provides four modules (see figure 1 below) to enable organizations to identify people, relationships, and data patterns, and to share that information anonymously to identify potential fraudulent or suspicious behavior. IBM's EAS consists of

* IBM Identity Resolution, which identifies an individual entity and connects the data associated to that individual across data silos;
* IBM Relationship Resolution, which identifies non-obvious relationships to reveal social, professional, or criminal networks. This module also provides instant alerts once data connections are detected;
* IBM Anonymous Resolution, which de-identifies sensitive data sets using proprietary preprocessing and one-way hashing to add additional layers of privacy, and links that data based on codes that enable entity relationship identification without violating individual privacy laws (see the sketch after this list). Data is shared anonymously and remains with the data owner to ensure data security;
* IBM Name Resolution, which includes name searching, a variation generator, a parser, a culture classifier, and gender determination. Global Name Recognition's primary use is to recognize customers, citizens, and criminals across multiple cultural variations of name data. A practical application of the name variation generator is learning the different spellings of names across various geographic regions.
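
The sketch below illustrates the general idea behind the anonymous matching described for the Anonymous Resolution module: each party normalizes and one-way hashes its identity attributes locally, and only the resulting digests are compared. This is a simplified, assumption-laden illustration (real systems add proprietary preprocessing, salting, and key management), not IBM's actual algorithm.

```python
import hashlib

def anonymize(name: str, dob: str) -> str:
    """Normalize, then one-way hash a name/date-of-birth pair."""
    key = f"{name.strip().lower()}|{dob}"
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

# Each organization anonymizes its own records locally; raw data never leaves it.
bank_tokens = {
    anonymize("John Q. Smith", "1970-02-14"),
    anonymize("Mary Jones", "1982-09-30"),
}
agency_tokens = {anonymize("john q. smith", "1970-02-14")}  # same person, different casing

# Only the digests are compared, so entities can be linked without exposing PII.
matches = bank_tokens & agency_tokens
print(f"{len(matches)} anonymous match(es) found")
```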



Figure 1. EAS's Identity & Relationship Recognition Platform, IBM 2005

Government Use

Governments are obligated to spend taxpayers' dollars prudently in addition to protecting the public trust. This includes ensuring that they provide proper payments, services, and benefits to all social services recipients. Improper payments represent 10 percent or more of the total payout in social benefits. The US government issues over $6.6 billion (USD) in improper payments annually. The identification of relationships and data patterns and their associated entities can identify these data anomalies before fraudulent payments are issued, allowing money to be accurately channeled to the correct recipients.

In the aftermath of hurricane Katrina, the US federal government distributed $1.2 billion (USD) in aid to individuals who submitted fraudulent claims, either by using the same name at multiple addresses, or by using multiple names at the same address. This example highlights the benefits of entity analytics over CDI in the detection of fraud. Where CDI attempts to reconcile the data into one correct version, EAS tries to spot the multiple records and creates a flag to identify the discrepancies. Solutions such as EAS identify this type of activity before payments are made, thus reducing the possibility of fraudulent claims.

National security and terrorism prevention are major priorities for many countries. Identification of terrorists and individuals associated with known terrorists is crucial to safeguarding national security and to developing a list of potential security threats. For instance, the United States is currently using name recognition technology in the war on terrorism. The US Homeland Security agency used EAS to analyze Iraqi data sources in an effort to leverage data to help identify and gather relationship information during interrogations. Consequently, approximately 2,000 relationships of interest between intelligence agency personnel, service personnel, criminals, detainees, kin, tribal leaders, tribal members, and those interrogated were discovered. The detection of these relationships assisted in identifying and capturing potential terrorists before they committed acts of terror as well as in developing strategies based on potential areas of threat.

Additionally, governments are using EAS at an international level to help prevent terrorists and potential criminals from entering or exiting a country. An individual's identity, related to the way he or she spells a surname, can be different across multiple geographic regions. Ordinarily, data inconsistencies of this nature may present one individual as multiple individuals based on the recorded inconsistencies within the different systems. With an EAS solution in place, the systems can link and match these data sets to find consistent elements, and link them to create a complete individual record, thereby turning multiple fictitious people into one entity. Additionally, IBM Anonymous Resolution, coupled with anonymous identification, helps protect individual privacy and adheres to international privacy laws.
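
As a rough illustration of how spelling variants might be linked back to a single entity, the sketch below clusters border records whose normalized names are sufficiently similar. Production name-recognition software applies culture-specific rules rather than a generic similarity ratio; the names, threshold, and greedy clustering here are assumptions for demonstration only.

```python
from difflib import SequenceMatcher

border_records = [
    ("Mohammed Al-Rashid", "passport"),
    ("Muhamad Alrashid",   "visa application"),
    ("Mohamed Al Rashid",  "watch list"),
    ("Peter Svensson",     "passport"),
]

def normalize(name: str) -> str:
    """Lowercase and drop hyphens so superficial differences do not matter."""
    return " ".join(name.lower().replace("-", " ").split())

def same_entity(a: str, b: str, threshold: float = 0.8) -> bool:
    """Treat two names as one entity if their similarity clears the threshold."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

# Greedy clustering: each record joins the first entity it resembles.
entities = []  # each entity is a list of (name, source) tuples
for record in border_records:
    for entity in entities:
        if same_entity(record[0], entity[0][0]):
            entity.append(record)
            break
    else:
        entities.append([record])

for i, entity in enumerate(entities, 1):
    print(f"Entity {i}: {[name for name, _ in entity]}")
```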

Financial and Insurance Industry Applications

In both the banking and insurance industries, the need to identify and track data patterns and entity relationships has become essential to detecting potential fraud and money laundering activities. One example of such activity is the submission of forged mortgage applications marked as approved. Bank employees have pocketed millions of dollars this way, by creating fake customers and changing small amounts of application data on approved forms. With the ability to match "like" forms by collecting and storing every piece of information, financial institutions can raise flags based on recurring data patterns, thereby decreasing the potential for fraud.

Balancing Trade-offs Between Security and Privacy

As analytics software becomes more entrenched in general use, questions arise as to whether its benefits in identifying criminals and terrorists outweigh its potential to infringe on personal privacy. Governments must strike a balance between the effective management and analysis of information assets to recognize and preempt potential threats, and the preservation and protection of citizens' personal privacy and civil rights. Citizens must also be confident that information under the care of the organizations entrusted to protect them is not re-tasked or re-purposed for missions beyond the scope of the one for which it was gathered.

Data management represents an effective approach to striking this balance. Responsibly managed information analysis enables national security compliance through effective and accurate watch list referencing, intuitive filtering, and know your customer (KYC) controls, as designated by such regulatory guidelines as the US Patriot Act, the Bank Secrecy Act, and the international Basel Accords. This is done while providing a centralized source for managing the personally identifiable information (PII) security, collective notification, opt-out, and access controls resident in almost all privacy and regulatory requirements. Because the government is granted access only to data relating to known terrorists, the balance of the citizen data is never shared with it. EAS software thus accomplishes this in a manner consistent with international and domestic privacy laws.

The ability to identify people, track their movements, and uncover interconnections in their relationships and social associations is imperative to help preempt potential security threats. In the financial and insurance industries, using these tools can reduce fraud and create an environment of proactive fraud detection. Although there remains the issue of personal security and the question as to whether the government has the right to capture so much information about so many people, the benefits of identifying and matching individuals based on their associations have proven advantageous in the detection and prevention of potential fraud and terrorist activity. Furthermore, financial, insurance, and security organizations may derive immediate benefits from such entity analytics software as IBM's Entity Analytics by proactively thwarting fraudulent and criminal activities, and saving time, money, and lives in the process.

Supply chain execution, planning, and optimization systems have been available in the market for a couple of years. These systems have been implemented across various industry sectors and organizations, with varied degrees of success. In many supply chain projects, one of the most important issues is the reporting, monitoring, and performance tracking mechanism, which generally remains unaddressed. Supply chain projects usually bring about a complete change in the way an organization plans, optimizes, and executes. Hence, these projects take a long time to stabilize.

Typically, as soon as the new supply chain system goes live, an organization is eager to have the basic transactions—production planning, materials planning, and procurement—up and running, and it rushes to put these in place. The organization focuses on the transaction reports, and supply chain optimization, monitoring, and performance management all take a “back seat” (are placed second in importance).

Most organizations have been operating with a supply chain system for a fair amount of time, and they have advanced along the learning curve. That is, they have reached a certain level of understanding of supply chain management (SCM). Now their focus is moving toward the next level of complexities: supply chain monitoring and performance management.

Following are some of the more important goals organizations are trying to reach, the functionalities they require to achieve these goals, and current trends in the area of supply chain reporting, monitoring, and performance management.

1. Corporate strategy to operations strategy
Organizations are trying to create a framework for top management to link corporate strategy and operations strategy. Once senior-level executives (CXOs) decide on a corporate strategy's objectives, they want to break those objectives down at every level of the organization's hierarchy. At each of these levels, the key performance indicators (KPIs) are designed to align with the top-level KPIs.

For example, the chief operations officer (COO) may be targeting a 5 percent reduction in costs for the current financial year. To meet this objective, the COO may plan to reduce SCM costs by 4 percent. SCM costs may be the responsibility of the vice president of supply chain. SCM costs, in turn, may be broken down further into lower-level KPIs, which might be the responsibility of lower-level managers. Thus, the operations strategy is modeled to align with the top-level corporate strategy.
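
The cascade described above can be modeled as a small KPI tree in which each child derives its target from its parent. The sketch below is a toy illustration; the KPI names and weights are invented, and a real system would let each level negotiate or override its target.

```python
# Invented KPI hierarchy: 5% overall target cascades to a 4% SCM target, and so on.
kpi_tree = {
    "name": "Total cost reduction", "target_pct": 5.0,
    "children": [
        {"name": "SCM cost reduction", "weight": 0.8,
         "children": [
             {"name": "Inventory carrying cost", "weight": 0.6, "children": []},
             {"name": "Freight cost",            "weight": 0.4, "children": []},
         ]},
        {"name": "Overhead reduction", "weight": 0.2, "children": []},
    ],
}

def cascade(node, parent_target=None, level=0):
    """Derive each child's target from its parent's target and its weight."""
    if "target_pct" in node:              # the top-level target is set directly
        target = node["target_pct"]
    else:                                 # lower levels inherit a weighted share
        target = parent_target * node["weight"]
    print("  " * level + f"{node['name']}: {target:.1f}% reduction")
    for child in node["children"]:
        cascade(child, target, level + 1)

cascade(kpi_tree)
```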

2. Dashboards and scorecards
Organizations are looking for visually intuitive dashboards that display the current status of KPIs vis-a-vis their targets. Color codes are an added advantage in that they denote the status of the various KPIs at a glance. Organizations require a flexible tool that allows them to model the scorecard by giving different weights to various KPIs.

3. Target setting
The supply chain analytics system should be able to capture the target values of KPIs, and should provide the functionality to cascade targets decided at the top level of the organization's hierarchy down to the bottom level, based on predefined logic.

4. Benchmarking
Benchmarking is an important feature for organizations that want to measure their performance against industry standards. Thus, the system should be able to capture data from various publications that publish KPI benchmark figures for various industries.

5. Predefined KPI models
Organizations anticipate the benefits of out-of-the-box, predefined KPIs that are based on popular industry-standard supply chain models, such as the supply chain operations reference (SCOR) model. They believe that this out-of-the-box content will help save on implementation time. The system should be flexible enough to support any changes that an organization might want to make to the KPI hierarchy or the KPI formulas.

6. Flexible reporting structure
The system should have various predefined reports. The reporting structure should be flexible so that users can customize reports by adding or removing columns to meet their requirements.

7. Drill-down feature to perform root cause analysis
The drill-down feature is essential, as this allows users to navigate through the various levels of the KPI hierarchy. This will enable users to perform a root cause analysis of any supply chain problem, thus saving valuable time when diagnosing and correcting problems. The drill-down functionality should also be available for various dimensions, such as product, product group, customer, customer group, company, region, etc.

8. Role-based access
The system should support role-based access to data, KPIs, and reports, which is required to maintain data confidentiality and data integrity. Role-based accessibility should also be supported at the various levels of the dimensions. For example, a user may have access to the delivery schedule report only for certain product categories and certain regions.

9. Simulation and what-if analysis
Simulation is a very important feature, as executives can simulate various scenarios and perform what-if analysis. They can see how the KPIs perform under different conditions. For example, the vice president of supply chain may change the value of the production schedule adherence KPI to see how it affects inventory levels. This will help to fix the KPI target at an optimal level.

10. Predictive modeling
Predictive modeling will help the system build relationships between various KPIs, based on past transactional data. For example, the system should be able to find the correlation between forecast accuracy KPI data and finished goods inventory data, based on past information. The system needs a predictive engine to perform such analysis. Once relationships between KPIs have been established, the system can store them, which will help users perform root cause analyses.
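
A minimal sketch of this idea: compute the historical correlation between two KPIs and report the relationship. The figures below are invented; a real analytics system would pull these series from the data warehouse and use a proper predictive engine rather than a single correlation coefficient.

```python
# Invented monthly KPI history; a real system would read this from the DW.
forecast_accuracy_pct    = [92, 88, 95, 81, 76, 90, 85, 79]
finished_goods_inventory = [410, 460, 380, 540, 600, 430, 500, 570]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(forecast_accuracy_pct, finished_goods_inventory)
print(f"Correlation between forecast accuracy and inventory: {r:.2f}")
if r < -0.7:
    print("Strong inverse relationship: poor forecasts tend to inflate inventory.")
```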

11. Alerts
The system should be able to generate alerts and send notifications, through e-mail or other means, about changes or problems, thus allowing the person responsible to take timely corrective actions.

12. Data flow
A tool this powerful and flexible must be built on a data warehousing framework, which inherently supports some of the features discussed above. Data can be uploaded into the data warehouse (DW) through various means. Direct links can be established between the supply chain analytics system and the transactional and planning systems (that is, enterprise resource planning [ERP] and supply chain planning [SCP] and optimization systems), or data can be uploaded into the DW through flat files. (See Figure 1.)

Figure 1. The flow of information to the supply chain analytics system from other key enterprise systems.

Based on my experiences working on supply chain projects with a variety of companies, Table 1 shows how user companies rate the supply chain analytics system features discussed above.

Functionalities Essential Functions Nice-to-have Functions
Ability to link corporate strategy to operations strategy Y
Dashboards and scorecards Y
Target setting Y
Benchmarking Y
Predefined KPI models Y
Flexible reporting structure Y
Drill-down feature to perform root cause analysis Y
Role-based access Y
Simulation and what-if analysis Y
Predictive modeling Y
Alerts Y
Data flow Y

Table 1. Company ratings of supply chain analytics systems’ features.

Supply chain analytics is becoming popular, as organizations are finding traditional reporting structures incapable of handling the increasing complexities of their supply chain. In addition, supply chain analytics allows organizations to make the necessary alignment between top-level KPIs and lower-level KPIs, which helps the organization work toward a uniform goal and vision.

In essence, supply chain analytics is a tool that will increase the speed of decision making, which will help to increase the supply chain’s flexibility and adaptability, and help organizations cope with the uncertainties of the operating environment.

The market is witnessing an unprecedented shift in business intelligence (BI), largely because of technological innovation and increasing business needs. The latest shift in the BI market is the move from traditional analytics to predictive analytics. Although predictive analytics belongs to the BI family, it is emerging as a distinct new software sector.

Analytical tools enable greater transparency, and can find and analyze past and present trends, as well as the hidden nature of data. However, past and present insight and trend information are not enough to be competitive in business. Business organizations need to know more about the future, and in particular, about future trends, patterns, and customer behavior in order to understand the market better. To meet this demand, many BI vendors developed predictive analytics to forecast future trends in customer behavior, buying patterns, and who is coming into and leaving the market and why.

Traditional analytical tools claim to offer a true 360-degree view of the enterprise or business, but they analyze only historical data—data about what has already happened. Traditional analytics help gain insight into what went right and what went wrong in decision making. In other words, today's tools merely provide rear-view analysis. One cannot change the past, but one can prepare better for the future, and decision makers want to see the predictable future, control it, and take actions today to attain tomorrow's goals.

What is Predictive Analytics?

Predictive analytics is used to determine the probable future outcome of an event or the likelihood of a situation occurring. It is the branch of data mining concerned with the prediction of future probabilities and trends. Predictive analytics automatically analyzes large amounts of data with different variables; its techniques include clustering, decision trees, market basket analysis, regression modeling, neural nets, genetic algorithms, text mining, hypothesis testing, decision analytics, and more.

The core element of predictive analytics is the predictor, a variable that can be measured for an individual or entity in order to predict future behavior. For example, a credit card company could consider age, income, credit history, and other demographics as predictors when issuing a credit card, to determine an applicant's risk factor.

Multiple predictors are combined into a predictive model, which, when subjected to analysis, can be used to forecast future probabilities with an acceptable level of reliability. In predictive modeling, data is collected, a statistical model is formulated, predictions are made, and the model is validated (or revised) as additional data become available.
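
The collect, formulate, predict, and validate cycle can be sketched with a simple classifier. The example below assumes scikit-learn is available and uses synthetic applicant data with the predictors mentioned above (age, income, credit history); it is a generic illustration of predictive modeling, not any vendor's tool.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1. Collect data: predictors are age, income (in thousands), years of credit history.
X = rng.normal(loc=[40, 55, 10], scale=[12, 20, 6], size=(500, 3))
# Synthetic outcome: lower income and shorter history raise the default risk.
risk = 1 / (1 + np.exp(0.05 * X[:, 1] + 0.3 * X[:, 2] - 5))
y = (rng.random(500) < risk).astype(int)

# 2. Formulate the model on one portion of the data...
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 3. ...then validate it on data the model has never seen.
print(f"Hold-out accuracy: {model.score(X_test, y_test):.2f}")

# 4. Predict: score a new applicant (age 29, income 38k, 2 years of history).
print("Estimated default probability:", round(model.predict_proba([[29, 38, 2]])[0, 1], 2))
```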

Predictive analytics combines business knowledge with statistical analytical techniques applied to business data to produce insights. These insights help organizations understand how people behave as customers, buyers, sellers, distributors, etc.

Multiple related predictive models can produce the insights needed to make strategic company decisions: where to explore new markets, acquisitions, and retention efforts; where to find up-selling and cross-selling opportunities; and where security and fraud detection can be improved. Predictive analytics indicates not only what to do, but also how and when to do it, and can explain what-if scenarios.
Predictive analytics employs both a microscopic and a telescopic view of data, allowing organizations to see and analyze the minute details of a business, and to peer into the future. Traditional BI tools cannot accomplish this. Traditional BI tools work with the assumptions one creates, and then determine whether the statistical patterns match those assumptions. Predictive analytics goes beyond those assumptions to discover previously unknown data; it then looks for patterns and associations between seemingly disparate pieces of information.

Let's use the example of a credit card company operating a customer loyalty program to describe the application of predictive analytics. Credit card companies try to retain their existing customers through loyalty programs. The challenge is predicting which customers are about to be lost. In an ideal world, a company could look into the future and take appropriate action before customers switch to competitors. In this case, one can build a predictive model employing three predictors: frequency of use, personal financial situation, and the lower annual percentage rate (APR) offered by competitors. The combination of these predictors creates a predictive model, which works to find patterns and associations.

This predictive model can be applied to customers who start using their cards less frequently. Predictive analytics would classify these less frequent users differently than regular users. It would then find the pattern of card usage for this group and predict a probable outcome. The predictive model could identify patterns between card usage, changes in a cardholder's personal financial situation, and the lower APR offered by competitors. In this situation, the predictive analytics model can help the company identify those unsatisfied customers. As a result, companies can respond in a timely manner to keep those clients loyal by offering them attractive promotional services that dissuade them from switching to a competitor. Predictive analytics can also help organizations such as government agencies, banks, immigration departments, video clubs, etc., achieve their business aims by using internal and external data.
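
A toy version of this churn model is sketched below, combining the three predictors from the example into a single risk score. The weights, thresholds, and customer records are invented for illustration; a production model would be fitted from historical data rather than hand-tuned.

```python
# Invented customer records with the three predictors discussed in the text.
customers = [
    {"id": "C001", "monthly_txns": 42, "missed_payments": 0, "competitor_apr_gap": 0.5},
    {"id": "C002", "monthly_txns": 6,  "missed_payments": 2, "competitor_apr_gap": 4.0},
    {"id": "C003", "monthly_txns": 15, "missed_payments": 0, "competitor_apr_gap": 2.5},
]

def churn_score(c) -> float:
    """Higher score = higher risk of switching to a competitor (weights are assumptions)."""
    usage_risk   = max(0.0, 1 - c["monthly_txns"] / 30)   # infrequent card use
    finance_risk = min(1.0, c["missed_payments"] / 3)      # personal financial situation
    apr_risk     = min(1.0, c["competitor_apr_gap"] / 5)   # cheaper rival offer
    return 0.5 * usage_risk + 0.2 * finance_risk + 0.3 * apr_risk

for c in sorted(customers, key=churn_score, reverse=True):
    action = "contact with retention offer" if churn_score(c) > 0.5 else "no action"
    print(f"{c['id']}: score {churn_score(c):.2f} -> {action}")
```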

On-line books and music stores also take advantage of predictive analytics. Many sites provide additional consumer information based on the type of book one purchased. These additional details are generated by predictive analytics to potentially up-sell customers to other related products and services.

Predictive Analytics and Data Mining

The future of data mining lies in predictive analytics. However, the terms data mining and data extraction are often confused with each other in the market. Data mining is more than data extraction: it is the extraction of hidden predictive information from large databases or data warehouses. Data mining, also known as knowledge discovery in databases, is the practice of automatically searching large stores of data for patterns. To do this, data mining uses computational techniques from statistics and pattern recognition. Data extraction, on the other hand, is the process of pulling data from one data source and loading it into a target database; for example, pulling data from a source or legacy system and loading it into a standard database or data warehouse. The critical difference between the two is that data mining looks for patterns in data.

A predictive analytical model is built with data mining tools and techniques. Data mining tools extract data by accessing massive databases, and then process that data with advanced algorithms to find hidden patterns and predictive information. Though there is an obvious connection between statistics and data mining, many of the methodologies used in data mining originated in fields other than statistics.

Data mining sits at the common borders of several domains, including database management, artificial intelligence, machine learning, pattern recognition, and data visualization. Common data mining techniques include artificial neural networks, decision trees, genetic algorithms, the nearest neighbor method, and rule induction.
Some vendors have been in the predictive analytical tools sector for decades; others have recently emerged. This section will briefly discuss the capabilities of key vendors in predictive analytics.

SAS

SAS is one of the leaders in predictive analytics. Though it is a latecomer to BI, SAS started making tools for statistical analysis at least thirty years prior, which has helped it to move into data mining and create predictive analytic tools. Its application, SAS Enterprise Miner, streamlines the entire data mining process from data access to model deployment by supporting all necessary tasks within a single, integrated solution. Delivered as a distributed client-server system, it is well suited for data mining in large organizations. SAS provides financial, forecasting, and statistical analysis tools critical for problem-solving and competitive agility.

SAS is geared toward power users, and is difficult to learn. Additionally, in terms of real-time analytics and building dashboards and scorecards, SAS is a laggard compared to competitors like Cognos, Business Objects, and Hyperion; however, its niche products in data mining and predictive analytics have made it stand out from the crowd.

SPSS

SPSS Inc. is another leader in providing predictive analytics software and solutions. Founded in 1968, SPSS has a long history of creating programs for statistical analysis in the social sciences. SPSS today is known more as a predictive analytics software developer than as a statistical analysis software developer.

SPSS has played a thought-leadership role in the emergence of predictive analytics, showcasing predictive analytics as an important, distinct segment within the broader business intelligence software sector. SPSS performs almost all general statistical analyses (regression, logistic regression, survival analysis, analysis of variance, factor analysis, and multivariate analysis) and now has a full set of data mining and predictive analytical tools.

Though the program comes in modules, it is necessary to have the SPSS Base System in order to fully benefit from the product. SPSS focuses on ease of use; thus beginners enjoy it, while power users may quickly outgrow it. SPSS is strong in the area of graphics, but weaker in cutting-edge statistical procedures, and it lacks robust methods and survey methods. The latest SPSS 14.0 release has improved links to third-party data sources and programming languages.

Insightful

Along similar lines is Insightful Corporation, a supplier of software and services for statistical data analysis and data mining of numeric and text data. It delivers software and solutions for predictive analytics, and provides enterprises with scalable data analysis solutions that drive better decisions by revealing patterns, trends, and relationships. Insightful's S-PLUS 7 is a standard software platform for statistical data analysis and predictive analytics. Designed with an open architecture and flexible interfaces, S-PLUS 7 is an ideal platform for integrating advanced statistical techniques into existing business processes.

Another tool offered by Insightful is Insightful Miner, a data mining tool. Its ability to scale to large data sets in an accessible manner is one of its strengths. Insightful Miner is also a good tool for data import/export, data exploration, and data cleansing tasks, and it reduces dimensionality prior to modeling. While it has powerful reporting and modeling capabilities, it has relatively low levels of automation.

StatSoft Inc.

StatSoft, Inc. is a global provider of analytic software. Its flagship product is Statistica, a suite of analytics software products. Statistica provides a comprehensive array of data analysis, data management, data visualization, and data mining procedures. Its features include a wide selection of predictive modeling, clustering, classification, and exploratory techniques made available in one software platform. Because of its open architecture, it is highly customizable and can be tailored to meet very specific and demanding analysis requirements. Statistica has a relatively easy to use graphical programming user interface, and provides tools for all common data mining tasks; however, its charts are not easily available for the evaluation of neural net models. Statistica Data Miner is another solution that offers a collection of comprehensive data mining tools. It is one of two suites that provide a support vector machine (SVM), which provides a framework for modeling learning algorithms.

Knowledge Extraction Engines (KXEN)

Knowledge Extraction Engines (KXEN) is the other vendor that provides a suite that includes an SVM. KXEN is a global provider of business analytics software. Its self-named tool, KXEN, provides an SVM and merges the fields of machine learning and statistics.
KXEN Analytic Framework is a suite of predictive and descriptive modeling engines that create analytic models. It places the latest data mining technology within reach of business decision makers and data mining professionals. The key components of KXEN are robust regression, smart segmenter, time series, association rules, support vector machine, consistent coder, sequence coder, model export, and event log.

One can embed the KXEN data mining tool into existing enterprise applications and business processes. No advanced technical knowledge is required to create and deploy models, and KXEN is a highly accurate data mining tool that is almost fully automatic. However, one record must be submitted for every entity to be modeled, and this record must contain a clean data set.

Unica

Affinium Model is Unica's data mining tool. It is used for response modeling to understand and anticipate customer behavior. Unica is an enterprise marketing management (EMM) software vendor, and Affinium Model is a core component of the market-leading Affinium EMM software suite.

The software empowers marketing professionals to recognize and predict customer behaviors and preferences—and use that information to develop relevant, profitable, and customer-focused marketing strategies and interactions. The automatic operation of the modeling engine shields the user from many data mining operations that must be manually performed by users of other packages, including a choice of algorithms.

Affinium is an easy-to-use response modeling product and is suitable for the non-data miner or non-statistician who lacks statistical and graphical knowledge. New variables can be derived in the spreadsheet with a rich set of macro functions; however, the solution lacks data exploration tools and data preparation functions.

Angoss Software Corporation

Another leading provider of data mining and predictive analytics tools is Angoss Software Corporation.

Its products provide information on customer behavior and marketing initiatives to help in the development of business strategies. Main products include KnowledgeSTUDIO and KnowledgeSEEKER, which are data mining and predictive analytics tools. The company also offers customized training to its clients, who are primarily in the financial services industry.

Angoss has developed industry-specific predictive analytics software such as Angoss FundGuard, Angoss Telecom Marketing Analytics, and Angoss Claims & Payments Analytics. Apart from the financial industry, Angoss software is used by telecom, life sciences, and retail organizations.

Fair Isaac Corporation

Along similar lines, Fair Isaac Corporation is the leading provider of credit scoring systems. The firm offers statistics-based predictive tools for the consumer credit industry. Model Builder 2.1 addresses predictive analytics, and is an advanced modeling platform specifically designed to jump-start the predictive modeling process, enabling rapid development and deployment of predictive models into enterprise-class decision applications. Fair Isaac's analytic and decision-management products and services are used around the world, and include applicant scoring for insurers, and financial risk and database management products for financial concerns.

IBM

Not to be left out, the world's largest information and technology company, IBM, also offers predictive analytics tools. DB2 Intelligent Miner for Data is a predictive analytical tool that can be used to gain new business insights and to harvest valuable business intelligence from enterprise data. Intelligent Miner for Data mines high-volume transaction data generated by point-of-sale, automated teller machine (ATM), credit card, call center, or e-commerce activities. It better equips an organization to make insightful decisions, whether the problem is how to develop more precisely targeted marketing campaigns, reduce customer attrition, or increase revenue generated by Internet shopping.

Intelligent Miner Scoring is built as an extension to DB2 and works directly from the relational database. It accelerates the data mining process, resulting in the ability to make quicker decisions from a host of culled data. Additionally, because DB2 Intelligent Miner Scoring is compatible with Oracle databases, companies no longer have to wait for Oracle to incorporate business intelligence capabilities into its database product.

Interelate, a customer intelligence ASP, bundles applications and industry expertise in data analysis, database marketing, analytics, and CRM to deliver an analytical CRM service to its customers. Analytical CRM functions include data mining, campaign management, personalization, and clickstream analysis. Interelate provides its clients with customer analytics, proprietary and third-party data models and scoring, data mining, campaign management, personalization, and real-time recommendation capabilities. This is done by pulling data from its clients' customer and operational data systems and processing it through Interelate's analytics platform.

The offering consists of two components. The first is hosted applications from vendors such as E.piphany and Net Perceptions. The second is expert data cleansing and analysis so the data processed by the applications is actionable for the client. Interelate recently announced packaged services in three industry verticals for its Global 2000 clients:

* E-Commerce - A service designed to enhance customer acquisition and loyalty by attracting site traffic, segmenting registered users by value, and converting high-value segments into customers.

* Financial Services - A service designed to increase cross-selling by segmenting a client's existing customer base depending on a client's likelihood to purchase additional financial services.

* Travel and Leisure - A service designed to improve both customer acquisition and cross-selling by profiling the behavioral predisposition of existing customers and new prospects.

Interelate reports average implementation time ranges from 2 to 3 months depending on the nature of the client's existing IT infrastructure and the type of services requested. Pricing depends on the range of services provided as well as the amount and frequency of data analysis. Interelate's focus is to compete primarily on quality of service rather than price.
Interelate's business model is highly focused on delivering analytical CRM services to Global 2000 companies in the three vertical markets. This is a different approach from that of many ASPs, such as Corio and USinternetworking, which deliver many applications to a wide range of organizations with implementation expertise, but limited expertise in providing best practices on using the applications after they are installed. Interelate's key differentiator is that it does have the expertise to provide best practices for using software packages such as E.piphany and Net Perceptions.

Although TEC is unaware of any direct competition, a number of CRM vendors supply analytical CRM applications. Companies such as BroadVision, Broadbase, SAS, Quadstone, and WebTrends provide one or more of these functions. Finding an ASP that hosts these products with value-added services is difficult.

Some of Interelate's achievements validate their business model. Goldman Sachs is both a customer and a financier. Additional financing comes from firms such as Deutsche Banc and Dell Computer. Other high profile customers include McKinsey & Company, Nissan Corporation, and the US Department of Defense. Interelate has also attracted 250 employees since its inception in July 1999.
