In today’s business environment, most organisations share two goals: the first is to produce a product or service, and the second is to supply that product or service to a customer. Businesses come in many forms, for example service providers, manufacturers, or retailers. Yet despite the variations between organisations, two things consistently impact profitability:
• The overall cost to produce the product or service (overheads)
• The quality of the brand, as perceived by the customer (market share).
The reality is that improving both of these aspects becomes more important over time. For example, when Apple conceived the iPhone, the original goal was to take the device from concept to production. Once the product was created, the design and features finalised, and the result well received by the public, the ongoing goal became: how do you cut the cost of manufacturing an iPhone? Reducing costs doesn’t just apply to physical components and staffing; it also applies to business processes. The bottom line is cost-to-serve: even a small percentage reduction can increase profit margins substantially.

The second aspect that can have a huge impact on total profit is customer experience. In a world where social media is a way of life, poor customer experiences can be extremely damaging. Many areas can affect the customer experience, including the initial sales process, after-sales service and support, overall product quality, internal business processes, and general product performance.
The Traditional Approach
So how can opportunities to make better decisions be identified? Traditionally, opportunities are identified via:
• The intuition of internal industry experts within the business, who relay expert domain knowledge
• The identification of process inefficiencies by trial and error, or in response to customer feedback.
There is a huge amount of risk when businesses make decisions based upon intuition alone. For many organisations, doing business in today’s world means operating in a fast-paced, cut-throat environment, and the need to be proactive, while executing well-informed, intelligent, measurable decisions based on knowledge obtained via data-driven insight, has never been greater. Let’s use a real-world case study of two Australian universities in the same city.
• University A requires an Australian Tertiary Admission Rank (ATAR) of 90 or more for entry
• University B requires an ATAR of 60 or more.
Experts from both institutions accept that University A acquires the best-performing students while University B acquires students from the rest. However, an analysis of the existing student enrolment data determined that, while it is true University A receives most top performers from the entire state, and even interstate, University B only attracts students from within a 30 km radius of the campus:

Since both universities operate in the same industry, industry experts assume each gets its fair share of students based on academic level. But this is not so. Why? Because degrees obtained by completing a course at University B are held in lower regard by employers and other institutions, putting them on par with, or even below, more fast-tracked career paths that present similar opportunities at a lower cost in time and/or money. In addition, there are many more lower-ranked institutions like University B, which makes proximity a determining factor for potential students.
Data-Driven Knowledge
Knowing that 99% of students at University B came from within a 30 km radius of the campus was information not easily visible to the university’s business planning team. But how can this information be turned into knowledge?
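A radius finding like this can be reproduced with a simple great-circle distance check over the enrolment records. The sketch below uses the standard haversine formula; the campus and student coordinates are hypothetical placeholders (only the 30 km threshold comes from the case study), and a real analysis would run the same check over the full student address dataset.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical campus location and three student home addresses.
campus = (-37.80, 144.96)
students = [(-37.75, 145.00), (-37.90, 144.80), (-36.80, 144.30)]

# Count how many students live within the 30 km catchment radius.
within_30km = sum(haversine_km(*campus, lat, lon) <= 30 for lat, lon in students)
share = within_30km / len(students)
print(f"{share:.0%} of sampled students live within 30 km of campus")
```

Applied to every enrolled student, the resulting share is exactly the kind of derived feature (catchment proximity) that the planning team could not see in the raw enrolment records.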
Both institutions wish to build predictive models of student load (future enrolments). University A can use features related to the number of academic achievers across the region capable of making the grade, while University B needs to concern itself with the academic level of students graduating from the specific schools situated within a certain radius of the campus.
A substantial portion of the data needed to model the problem, for both institutions, has to be obtained from a third-party data source. It is identified that Tertiary Services holds substantial data on all Kindergarten to Year 12 (K-12) enrolments, and is willing to make this continuously updated data source available for research. Accurate future predictions are essential for each university, as the numbers are vital for budget planning and staffing requirements. Historically, numbers at both institutions have been forecast using a simple linear regression model; however, these forecasts have typically been out by a huge margin, leading to a consensus that there is no real linear relationship between enrolment numbers and time. Each university wishes to create a more accurate and insightful model to predict enrolment numbers and make budgeting and planning more reliable.
To build the models to predict student load, each university has to ingest and host the external data while providing the mechanisms for data scientists, engineers, and analysts to cleanse data, engineer features, model, and build reports. The full K-12 data is defined as Big Data, in that the full data source is constantly growing and is too big to process in a single node's memory. The best way to process Big Data is to continuously ingest it into a scalable platform that makes multiple nodes available for data processing; as the data grows, so does the processing power available to handle it. In today’s market there are many choices, including the Hadoop ecosystem and MPP databases, both of which can be run on-premise or in the public cloud. On top of the data platform, a compatible, scalable data science platform such as Spark is required for modelling.
Due to their differing characteristics, each university needs a different model based upon different features.
• University A’s model is based upon Tertiary Services data and macroeconomic data.
• University B’s model is based more upon the characteristics of the students in the local catchment schools.
In both cases, an order-of-magnitude reduction in error over the existing models is achieved.
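To illustrate why a time-only linear regression underperforms a feature-based model, here is a minimal sketch on synthetic data. Every number in it (the graduate counts, the 0.3 conversion rate, the noise level) is an invented assumption, not the universities' actual data; the point is only that when enrolment is driven by a catchment feature rather than by time, regressing on that feature cuts the error dramatically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten hypothetical years where enrolment tracks catchment-school graduate
# numbers (an assumed 0.3 conversion rate plus noise), with no time trend.
years = np.arange(2008, 2018)
grads = rng.integers(800, 1600, size=years.size)  # K-12 graduates in catchment
enrol = 0.3 * grads + rng.normal(0, 10, size=years.size)

def ols_mae(x, y):
    """Fit ordinary least squares on one predictor; return in-sample MAE."""
    X = np.column_stack([np.ones(len(y)), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.abs(X @ coef - y).mean()

mae_time = ols_mae(years - years[0], enrol)  # the old time-only baseline
mae_feat = ols_mae(grads, enrol)             # catchment-graduate feature

print(f"time-only MAE: {mae_time:.1f}, feature-based MAE: {mae_feat:.1f}")
```

The feature-based fit absorbs the variation the time index cannot explain, which is the mechanism behind the error reductions described above; the production models would of course use many more features and proper out-of-sample validation.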
Summary
It isn’t the type of business being delivered that determines opportunity, but rather, understanding the customer. Although two businesses may provide exactly the same product in exactly the same industry, the customer groups attracted to each business may be very different. The only way to fully understand opportunity is to understand the customer through intelligent data exploration. And for the company that embraces this, understanding the customer opens the door to a myriad of possibilities with regard to brand management, sales, customer loyalty, and retention.
Underpinning the need for, and delivery of, a data-driven organisation is access to the data. We live in the data age. Petabytes of valuable, insightful data are generated every day; data that can enable accurate, cost-saving intelligence to drive business planning. However, after 20 years of trying, businesses still have data locked up in silos. The silos of today might have fancy names, like EDW, but the problems are still the same: an inability to respond to internal and external change, and to rapidly provision data to validate the intuitions of the subject-matter experts in the business.
The key to a data-driven organisation is enabling rapid access to data, fostering a culture of quantitatively proving intuitions with knowledge, and demonstrating the measurable value that can be derived from these insights.
CBIG Consulting is our Portfolio Partner for:
Chief Customer Officer Singapore, 11-12 July
Chief Data & Analytics Officer Singapore, 25-26 July



