A process manufacturer with a network of assets spread across Europe needed to respond more flexibly to changes in customer demand while maintaining high asset utilisation, low working capital and low transport costs.
The situation was complex. The assets differed from one another, each with its own characteristics. The outflow from the installations could not simply be stopped between production runs, and a change of material resulted in a massive production loss, although a product type change without a change of material was feasible.
The producer had 25 production lines and served 1,000 customers with a total of 2,500 products. In short, the perfect complex planning issue for which our More Optimal platform was designed.
APPROACH
The planners had been working with a combination of SAP and Excel spreadsheets. They were handling a huge number of variables and attempting to incorporate increasingly shorter delivery times. The planners understood their trade, but the complexity of the puzzle was too great for the resources available. There was much to gain.
Our generic More Optimal platform makes it possible to create a customer-specific application in a short time, with all relevant planning rules built in. The platform is set up in close consultation with the user. First, the relevant Key Performance Indicators (KPIs) were defined. These included (1) demand fulfilment, (2) asset pull / productivity, (3) inventory, (4) transport costs and (5) planning effort.
In a number of joint work sessions, we established the planning process and drew up the rules for allocating products to the various production lines. In addition, the transport options relating to production locations and the rules for product changes were built in. By working closely with the planners at every step, we gradually developed the More Optimal platform, and this now shows in real-time the consequences of the decisions made by the planners and gives advice on how to improve the planning process.
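To give a feel for what such allocation rules look like once formalised, here is a minimal, hypothetical sketch of a product-to-line allocation model, written in Python with the open-source PuLP solver. The products, lines, capacities and cost figures are invented for illustration; this is not how the More Optimal platform itself is implemented.

```python
# Hypothetical sketch: allocate products to production lines at minimal cost,
# respecting line capacity. All data and cost figures are illustrative only.
import pulp

products = {"P1": 120, "P2": 80, "P3": 60}      # demand in tonnes
lines = {"L1": 150, "L2": 140}                  # capacity in tonnes
cost = {("P1", "L1"): 4, ("P1", "L2"): 6,       # cost per tonne of running
        ("P2", "L1"): 5, ("P2", "L2"): 3,       # a product on a line
        ("P3", "L1"): 7, ("P3", "L2"): 2}

model = pulp.LpProblem("product_line_allocation", pulp.LpMinimize)
x = {(p, l): pulp.LpVariable(f"x_{p}_{l}", lowBound=0) for (p, l) in cost}

# Objective: total allocation cost
model += pulp.lpSum(cost[k] * x[k] for k in cost)

# Each product's demand must be fully allocated
for p, demand in products.items():
    model += pulp.lpSum(x[(p, l)] for l in lines) == demand

# Line capacity may not be exceeded
for l, cap in lines.items():
    model += pulp.lpSum(x[(p, l)] for p in products) <= cap

model.solve(pulp.PULP_CBC_CMD(msg=False))
for k in cost:
    if x[k].value() > 0:
        print(k, x[k].value())
```

In practice the model would also carry the material-change and outflow constraints described above; the sketch only shows the basic structure of such an allocation problem.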
The application is also used to evaluate what-if scenarios and their impact on the KPIs. The manufacturer uses this functionality as part of the annual planning and budgeting process and relies on it for concrete operational issues on a more regular basis.
Manufacturers that best navigate the challenges of the world they operate in have a number of characteristics in common.
1. A clear vision of what their manufacturing operation will look like and how it will operate in three to five years’ time.
They have decided which markets and customers they want to serve and understand what it takes for their manufacturing operation to enable business success. They focus on a “vital few” strategic initiatives with clear deliverables and timelines, and drive consistent execution.
For most manufacturers, business conditions are more volatile and ambiguous than ever. They therefore review their strategic initiatives regularly in the light of developing conditions and adapt, while keeping their long-term course stable.
2. An aligned operating model
Winning manufacturers align their operating model with their vision. They know that if they don’t, their de facto strategy (their day-to-day operation) will deviate from their intended strategy. And they keep organisational complexity low, as complexity drives costs up and speed and flexibility down.
This means well-aligned and “leaned out” business processes, KPIs that help to control the operation, unambiguous roles and responsibilities, decision power low in the organisation, a reporting structure that creates transparency and insight into actual performance, and a meeting structure that facilitates effective, fact-based decision making.
3. Employees with a high level of ownership
Employees at all levels in the organisation feel co-owners of the company and demonstrate a relentless drive to eliminate performance bottlenecks.
They have the skills to be successful and make sure they acquire new skills in line with the evolving needs of the company.
4. Drive to eliminate complexity
Complexity creates costs and inflexibility. Winning manufacturers scrutinise each and every type of complexity: product and service design, the design of production means, the total cost of ownership of purchased goods and services, and the cost of ownership of a supplier, contractor or client.
5. Continuous investment in smart manufacturing
With an increasing digitalisation of their operations, they gain significantly in speed, flexibility, and productivity. They develop new business strategies and innovate products and services portfolios. In developing smart manufacturing, they not only focus on selecting the right technology, analytics programs, and algorithms but also nurture a digital culture and skills.
Ever since 2006 we have been supporting manufacturing companies to deliver on their vision. Please get in touch to explore how we could support you in becoming one of the winners.
In all sectors, companies are dealing with disruptions of increasing frequency and magnitude. Businesses must quickly scale down operations and then ramp them up again once demand returns. They have to switch product portfolios depending on the availability of components. Events that have caused havoc in the past decade include the Fukushima earthquake and tsunami in Japan, the Suez Canal blockage, lockdowns related to Covid-19 and its variants, semiconductor shortages, staff shortages, the war in Ukraine, exploding energy costs and high inflation.
Understandably, most of these disruptions took leadership teams by surprise. The worst of these disruptions have taken a toll on business output, revenue and profitability. Recovery can take months or even years.
Process mining provides the much-needed overview of the end-to-end supply chain, giving better insight and information for proactive collaboration both internally and across the wider supply chain. It also proposes decisions, together with their consequences, for real-time optimisation of flows.
PROCESS MINING – WHAT IT IS AND WHAT IT CAN DO
Process mining provides all the insights needed for targeted performance and efficiency improvements: fast, end-to-end and fact-based.
DISCOVER AND IMPROVE YOUR REAL PROCESSES
APPLICATIONS OF PROCESS MINING
FULL TRANSPARENCY
Instead of working with the designed process flow or the process flow depicted in the ERP system, process mining monitors the actual process at whatever granularity you want: the end-to-end process, procure-to-pay, manufacturing, inventory management, accounts payable, or a specific type of product, supplier, customer, individual order or individual SKU. Process mining also monitors compliance, conformance and cooperation between departments, or between clients, your own departments and suppliers.
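For readers who want to experiment with the underlying idea, the sketch below shows how an event log can be mined with the open-source pm4py library. It illustrates process discovery and conformance checking in general, not the specific tooling described in this article; the file name is a placeholder.

```python
# Illustrative sketch (not the tooling described above): discover the actual
# process flow from an event log with the open-source pm4py library.
import pm4py

# Placeholder path: an event log exported from the ERP/order system in XES format
log = pm4py.read_xes("purchase_to_pay_events.xes")

# Discover the process model that the recorded events actually follow
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)

# Visualise it to compare the real flow with the designed flow
pm4py.view_petri_net(net, initial_marking, final_marking)

# Conformance check: how many cases deviate from the discovered (or designed) model
diagnostics = pm4py.conformance_diagnostics_token_based_replay(
    log, net, initial_marking, final_marking)
print(sum(1 for d in diagnostics if not d["trace_is_fit"]), "non-conforming cases")
```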
OVERVIEW OF THE ENTIRE SUPPLY CHAIN
Dashboards are created to suit your requirements. They are flexible and can easily be altered whenever your needs change or bottlenecks shift. They create real-time insight into the process flow. At any time, you know how much revenue is at stake because of inventory issues, what the root causes are, which decisions you can take, and what their effects and trade-offs will be.
If supplier reliability is below target at the highest reporting level, you can drill down in real time to a specific supplier and a particular SKU to discover what is causing the problem. Suppliers can also be held to the best-practice service level of competing suppliers.
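The drill-down principle itself is straightforward. The following hypothetical sketch, using pandas on a few invented order lines, shows how an overall on-time figure is broken down to the supplier/SKU combinations behind it.

```python
# Hypothetical sketch of the drill-down idea: from an overall supplier-reliability
# figure down to the supplier/SKU combinations that cause the gap.
import pandas as pd

# Illustrative order lines; in practice this data would come from the ERP system
orders = pd.DataFrame({
    "supplier": ["A", "A", "B", "B", "B", "C"],
    "sku":      ["S1", "S2", "S1", "S3", "S3", "S2"],
    "on_time":  [True, False, True, False, False, True],
})

overall = orders["on_time"].mean()
print(f"Overall on-time delivery: {overall:.0%}")

# Drill down: reliability per supplier, then per supplier/SKU combination
by_supplier = orders.groupby("supplier")["on_time"].mean()
by_sku = orders.groupby(["supplier", "sku"])["on_time"].mean()
print(by_supplier.sort_values(), by_sku.sort_values(), sep="\n")
```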
MAKING INFORMED DECISIONS AND TAKING THE RIGHT ACTIONS
The interactive reports highlight gaps between actual and target values and give details of the discrepancies (figure A). By clicking on one of the highlighted issues, you can assign an appropriate action to a specific person (figure B); this can even happen automatically when a discrepancy is detected.
Direct communication about the action is facilitated in real time (figure C).
WRAP UP
Process mining is an effective tool to optimise end-to-end supply chain flows in terms of margin, working capital, inventory level and profile, cash, order cycle times, supplier reliability, customer service levels, sustainability, risk, predictability and more. Because process mining monitors the actual process flows in real time, it creates full transparency and therefore adds significant value beyond classic BI suites. Process mining can be integrated with existing BI applications and can enhance reporting and decision-making.
CHALLENGE
Companies that pack fresh products face massive complexity and unpredictability. They process many different products, all of which have specific requirements in terms of quality, class and size. They deal with a multitude of packaging requirements and variability in price agreements for each customer. And they handle huge swings in supply and demand. But the time frame in which packers must match supply and demand is short.
How do you balance customer requirements with product and process complexity to achieve high customer satisfaction and high ‘valorisation’? And how do you deal with last minute changes in supply and demand – for example, if a batch is rejected because it does not meet the quality requirements?
APPROACH
The packer had been using Excel spreadsheets to allocate products on packaging lines and carry out detailed line planning. This had caused misunderstandings and mistakes – and a higher workload than necessary for the planners. They were losing time creating iterative plans, and there was uncertainty about which version of the plan was most up-to-date and about which numbers were correct.
We knew that the More Optimal platform would resolve these problems and explained the benefits to our client. The need was so great and the benefits so obvious that the packer did not even want a ‘proof of value’, but immediately decided to develop and implement a dedicated application based on the More Optimal platform.
The goals were (1) a workable schedule produced faster, (2) a more efficient operation, (3) shorter lead times to improve product freshness, (4) better demand fulfilment and (5) increased flexibility.
The More Optimal platform makes it possible to build a customer-specific application in a short time with all relevant planning rules built in. The application is set up in close consultation with the user. First, the relevant Key Performance Indicators (KPIs) are defined to quantitatively determine the quality of the allocation plan. Two of these KPIs were demand fulfilment and lead time (related to product freshness).
In a number of joint work sessions, we drew up the allocation rules for products and determined how products from suppliers should be allocated to customers. By working intensively with the packer, we developed a dedicated application that shows the consequences of the decisions made by the planners and gives advice for better planning. With support from the planners, the application was then extended to optimise the detailed planning per packaging line, minimising changeover times and increasing the throughput capacity (OEE) of the lines. The application measures operational performance against the agreed KPIs.
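As a simplified illustration of the changeover-minimisation step, the sketch below orders a handful of invented products on a single line with a greedy heuristic. The real application uses the More Optimal platform's own optimisation; this toy example only shows the type of problem being solved.

```python
# Toy sketch (not the actual platform logic): order the products assigned to one
# packaging line so that total changeover time is low, using a greedy
# nearest-neighbour heuristic on a hypothetical changeover-time matrix (minutes).
changeover = {
    ("Tomato", "Pepper"): 10, ("Tomato", "Grape"): 30,
    ("Pepper", "Tomato"): 10, ("Pepper", "Grape"): 25,
    ("Grape", "Tomato"): 35, ("Grape", "Pepper"): 25,
}
products = ["Tomato", "Pepper", "Grape"]

sequence = [products[0]]
remaining = set(products[1:])
while remaining:
    last = sequence[-1]
    # Pick the product with the cheapest changeover from the one just packed
    nxt = min(remaining, key=lambda p: changeover[(last, p)])
    sequence.append(nxt)
    remaining.remove(nxt)

total = sum(changeover[(a, b)] for a, b in zip(sequence, sequence[1:]))
print(sequence, f"total changeover: {total} min")
```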
Recession on the horizon? Based on our own research and research by Bain & Company, Harvard Business Review, Deloitte, Gartner and McKinsey, we have formulated 7 actions to accelerate your profitability during and straight after a recession. Figure 1 shows how big the difference is between winners and losers. This does not apply only to EBIT: after a recession, winning companies also make significant strides in market share.
THE 7 MOST IMPORTANT ACTIONS TO BE AMONGST THE WINNERS
The key to success is preparation, although “preparation” is perhaps not quite the right word. Winners are winners because they structurally run a tight, efficient operation and have a clear vision. They are proactive, fast and decisive. They are financially prudent, so they can absorb setbacks and seize opportunities as they arise. The seven actions below indicate what that means in concrete terms.
1. CLEAR VISION AND ORGANISATIONAL ALIGNMENT
What will your business look like in three to five years? And in one year? What are the ‘vital few’ strategic initiatives and what is the path from strategy to concrete actions? Not only your leadership team needs to be committed and aligned; your entire organisation does. Strategy Deployment is a powerful tool to maintain alignment and focus, monitor progress against plan and make rapid, appropriate adjustments when conditions change.
2. UNDERSTAND YOUR STRATEGIC AND FINANCIAL POSITION
Mapping out your plans depends on your strategic and financial position (see figure 2).
3. FREE UP FINANCIAL RESOURCES
The focus is on aligning your spending with your vision and strategic initiatives; not blunt cost cutting. Zero-based Alignment is a good way to select and make lean those activities that are fully aligned. Non-aligned activities are stopped. The financial resources you free up can strengthen your balance sheet and/or support your investment agenda.
Currently we face high inflation. Supply chain problems and capacity bottlenecks are responsible for some of it, but their effect will fade. Another cause is the sharp rise in energy costs as a result of the conflict in Ukraine and the resulting economic sanctions. In time, part of these costs will fall back, but not to the old level. Costs will remain structurally higher owing to the urgency of the climate-change-driven energy transition. Furthermore, too much money is in circulation, and its effect on inflation will also persist for longer.
The current high inflation can turn margins negative very quickly. Speed and flexibility are called for, and selling prices have to go up. Raising prices in one go is difficult; it is better to do this in regular small steps. What is possible depends on the strength of your brand and the market your company operates in. Make sure you retain the right customers in the process.
4. RETAIN YOUR CUSTOMERS
Retaining customers is much cheaper than acquiring new ones. The margin impact is significant. Explore ways to help your customers through the economic downturn and particularly in the early upturn when the opportunities start to arise. Winners have already created the “currency” to invest. Just make sure you target the right customers.
5. PLAN FOR VARIOUS SCENARIOS
No one knows when and how a downturn will fully unfold and when the economy will start growing again. The winners have developed different scenarios, and they know how they should act in each scenario. This allows them to act quickly and decisively.
6. ACT QUICKLY AND DECISIVELY
Winning companies act quickly and decisively, in the downturn and particularly in the early upswing when the opportunities begin to emerge. They have already unlocked the financial resources to invest.
7. EMBRACE TECHNOLOGY
Not all companies have been equally aggressive in adopting new technologies. There are many opportunities here for improving efficiency or generating more value and thereby gaining a competitive advantage.
To underline the importance of technology further: figure 3 shows the development of total shareholder return before and after the recession of 2009/2010. It is clear to see how the winners break away from the rest.
Harvard Business Review found that 70% of companies failed to regain their pre-recession growth rate in the 3 years following the recession. Only 5% of companies manage to develop a growth rate that is consistently above that of their competitors (quarter-over-quarter simultaneous growth of sales and profit margin).
Digital leaders are three times more likely to achieve revenue and margin growth that exceeds their industry average.
Maintenance is a value creator rather than a cost generator. For asset-intensive industries, high uptime and reliability are critical to ensure return on assets; for asset-lighter industries, high uptime and reliability are critical in a just-in-time supply chain.
Current digital possibilities provide ample opportunities for Maintenance to play that all-important value-creator role. However, more often than not we see that the basics are just not in place: cooperation between Maintenance and Production is unproductive, mean time between repairs is too short, there is too much corrective maintenance versus preventive maintenance, maintenance backlog is growing, drawings are out of date as are maintenance plans, data is lacking and contractors are underperforming. The effects are too much downtime, unreliable production, low efficiency, high costs, too much working capital and dissatisfied employees.
Before deploying the various digital aids that are on the market nowadays, you must get the basics right. Key elements are:
organisational alignment
getting the maintenance strategy aligned with the business strategy
getting the structure of the basic maintenance processes right
fostering a deep and productive cooperation between Maintenance and Production
fostering a productive partnership with contractors
a powerful performance management system for understanding and acting on quality and productivity drivers
knowing what the critical equipment is
registering data on equipment behaviour, logging maintenance history and ensuring integrity of data
ensuring the technical condition of equipment is at a sufficient level
ensuring quality execution of corrective maintenance: root cause elimination
ensuring timely and quality execution of preventive maintenance routines
using condition monitoring of equipment
getting the skills and behaviours right
If you already have all this in place, equipment performance will already be high and costs significantly lower. Gradually, in line with the growing maturity of the organisation, you can integrate digital aids to achieve the next levels of equipment performance and efficiency, and even lower costs: IIoT (Industrial Internet of Things), smart equipment, mobile devices, wearables, digital twins, advanced analytics, predictive maintenance, seamless engineering, etc.
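As a small illustration of what such a digital aid can look like, the sketch below flags anomalous sensor readings with an off-the-shelf anomaly detector, a typical first step towards condition-based and predictive maintenance. The data is synthetic and the choice of algorithm is an assumption, not a recommendation.

```python
# Illustrative sketch of condition monitoring: flag unusual vibration/temperature
# readings with an anomaly detector, as a first step towards predictive maintenance.
# The sensor data below is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[4.0, 60.0], scale=[0.3, 2.0], size=(500, 2))  # vibration (mm/s), temperature (°C)
faulty = rng.normal(loc=[7.5, 75.0], scale=[0.5, 3.0], size=(10, 2))   # e.g. a drifting bearing
readings = np.vstack([normal, faulty])

detector = IsolationForest(contamination=0.02, random_state=0).fit(readings)
flags = detector.predict(readings)          # -1 = anomalous reading, 1 = normal

print(f"{(flags == -1).sum()} readings flagged for inspection")
```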
CHALLENGE
Developing new products and introducing the industrial production processes to support them is highly complex, especially when you’re at the limits of manufacturing technology. In this case, increasing market demand only raised the pressure further, since every product manufactured could be sold, and demand for new product types was growing very fast. It was a toxic combination.
We had already worked with this business unit to solve its manufacturing problems and enable it to become a more reliable supplier. Now the management team asked us back to help improve innovation reliability and reduce time-to-market for new products.
The situation in the innovation-to-market department was complex. There was strong demand for additional product types and the market was shifting from B2B to B2C, which meant a shift in product requirements. On top of that, additional resources were required to address problems in production, and the department was constantly hiring additional new product development resources. Competition was growing, so speed was of the utmost importance, and the improvement targets were extremely high.
APPROACH
Our analysis, which we conducted in close cooperation with the client, revealed three main problem areas:
1. Portfolio management
The innovation portfolio was too big, its content was inconsistent and the priorities were regularly changing. This situation had developed because of poor business and operations planning and, as a consequence, poor technology and product roadmap planning.
2. Resource management
a. The organisation had created a self-inflicted resource bottleneck. The problem was caused by trying to manage too many portfolio projects at the same time and by allocating too many projects to limited resources. The result was plummeting productivity.
b. The constant inflow of new hires was creating a skills issue. There was no time to train them, and knowledge was not readily accessible for the new hires because very little had been documented.
3. Project management of innovation projects
A project management process had been defined for only 2 out of 5 project categories. And because of time pressures, people were cutting corners in projects and tollgate discipline was poor. This behaviour was creating rework and thus project delays. Project quality was suffering, and this in turn was causing production problems and an increase in customer complaints.
We worked with the client to set clear goals to increase innovation output by reducing time-to-market and improving project reliability. The time-to-market target was a reduction from an average of 23 months to just 9 months. We set an aggressive 6-month timetable for achieving these goals and formed joint teams to drive the changes. Because the three main problem areas were very much interdependent and the lead time was short, we ran four workstreams in parallel: (1) single project management, (2) portfolio management, (3) business planning and roadmapping, and (4) knowledge capture and design rules. We selected six pilot projects to introduce the new ways of working and deliver actual results.
We set up a project governance structure, including a review team, a project team and several workstream teams, and established milestone deliverables. We used a combination of “waterfall” and “agile” approaches to get things done.
THE IMPLEMENTATION
Performance improvement programmes must carefully balance human and technical aspects if they are to deliver significant, sustainable results. A critical aspect for sustainability is the development of a deep local ownership of the solutions to the problems. Therefore, we approached the challenge by ensuring the solutions were found by a process of co-creation right from the start.
The developers just didn’t have any time to spare, but speed was essential, so we started by slashing the volume of projects in the portfolio. Next, we set priorities and reduced the number of projects allocated to the developers. This was a tough process, as there were many vested interests. However, it reconfirmed the analysis finding that the business had to get its strategic and operational planning right.
During the project we identified five different project categories, ranging from large, complex innovation projects down to factory support (crash actions). For each category, we designed and implemented project process maps, which included the project management methodology with team meetings, tollgate reviews, tollgate criteria, along with tools relevant at each stage in the project.
We designed and implemented a portfolio management process and system with clear roles and responsibilities, set up review teams for the various project categories, established criteria for admitting or refusing projects into the portfolio, and encouraged an attitude of “killing” projects as early as possible to eliminate waste and maintain a manageable portfolio. We also designed a process for allocating resources.
In parallel, we implemented five different business planning and roadmap processes, including a technology roadmap, a product roadmap and an application roadmap. To support the development of knowledge and skills, we established a process to capture and document learnings from all projects, regardless of whether they had been successful, unsuccessful or ditched.
The results were impressive. Time-to-market dropped from 23 months to 11 months within 6 months, with plans in place to meet the target level of 9 months. Equally important, the results were sustainable because the root causes had been identified and eliminated, and the solutions locked into the Performance Management Systems (PMSs) developed during the project. The PMSs also included key performance indicators to give managers and employees ready access to the quantifiable information needed to make fact-based decisions, both as teams and individually, and to take proactive and predictive action.
Throughout the implementation, a balanced combination of human and technical aspects drove the successes, and solutions were added to the PMS to support sustainability. By creating and communicating the right culture from the very start, we helped the client establish and communicate roles and responsibilities for employees at all levels. As the project progressed, employees began to see the value of their own contributions and to understand how their own performance influenced that of others, both within their discipline and beyond. As this understanding grew, a culture of accountability and collaboration evolved. Clear goals were communicated in a common language that everyone could understand, and employees embraced the new systems, processes and ways of working as their own.
Attaining world-class supply chain management and collaboration means developing and managing supply chains and partnerships so that your company is flexible and resilient, with response times and delivery performance that will beat the competition.
Future supply chains need to cope with the long-term trends of mass customisation, ever shorter life cycles and the more recent volatile conditions that are here to stay. In these market conditions, many companies will benefit from a “smart” supply chain, which combines the drive to eliminate waste (i.e. anything that doesn’t add value) with agility and responsiveness (i.e. the ability to handle unpredictability with speed and flexibility).
A smart supply chain enables fast, flexible supply of tailor-made products at competitive cost levels. It excels in having few product and process quality issues, reduced operational costs, increased flexibility, and high internal process speeds. It integrates customers and business partners to create value in both the primary and support processes.
Building a smart supply chain requires a holistic approach that integrates product and process design, organisation design, and digital solutions:
an unambiguous supply chain strategy
product configuration designed for postponement (late differentiation)
processes that are aligned with strategy and designed for minimal order cycle times
a flat organisation with multidisciplinary teams and no silos
integration with partners throughout the supply chain
an aligned performance management system with real-time information from the end-to-end process
supply chain visibility with the ability for stakeholders throughout the supply chain to access real-time data related to the order process, planning, inventory, delivery and potential supply chain disruptions
Artificial Intelligence is hot. We can hardly do anything without coming into contact, consciously or unconsciously, with forms of Artificial Intelligence. And it is becoming increasingly important. This article is an introduction to the field of Artificial Intelligence. It starts with a definition and then explores the different sub-specialties, complete with description and some applications.
WHAT IS ARTIFICIAL INTELLIGENCE?
Artificial Intelligence (AI) uses computers and machines to imitate people’s problem-solving and decision-making skills. One of the leading textbooks in the field of AI is Artificial Intelligence: A Modern Approach (link resides outside Axisto) by Stuart Russell and Peter Norvig. In it they elaborate four possible goals or definitions of AI.
Human approach:
Systems that think like people
Systems that behave like people
Rational approach:
Systems that think rationally
Systems that act rationally
Artificial Intelligence plays a growing role in the (Industrial) Internet of Things ((I)IoT), among other areas, where (I)IoT platform software can provide integrated AI capabilities.
SUB-SPECIALTIES WITHIN ARTIFICIAL INTELLIGENCE
There are several subspecialties that belong to the domain of Artificial Intelligence. While there is some interdependence between many of these specialties, each has unique characteristics that contribute to the overarching theme of AI. The Intelligent Automation Network (link resides outside Axisto) distinguishes seven subspecialties, figure 1.
Each subspecialty is further explained below.
MACHINE LEARNING
Machine learning is the field that focuses on using data and algorithms to let computers imitate the way humans learn, without being explicitly programmed, while gradually improving accuracy. The article “Axisto – an introduction to Machine Learning” takes a closer look at this specialty.
MACHINE LEARNING AND PREDICTIVE ANALYTICS
Predictive analytics and machine learning go hand in hand. Predictive analytics encompasses a variety of statistical techniques, including machine learning algorithms. Statistical techniques analyse current and historical facts to make predictions about future or otherwise unknown events. These predictive analytics models can be trained over time to respond to new data.
The defining functional aspect of these approaches is that predictive analytics provides a predictive score (a probability) for each “individual” (customer, employee, patient, product SKU, vehicle, part, machine or other organisational unit) in order to determine, inform or influence organisational processes that involve large numbers of “individuals”. Applications can be found in, for example, marketing, credit risk assessment, fraud detection, manufacturing, healthcare and government activities, including law enforcement.
Unlike other Business Intelligence (BI) technologies, predictive analytics is forward-looking: past events are used to anticipate the future. Often the unknown event of interest lies in the future, but predictive analytics can be applied to any type of “unknown”, be it past, present or future: for example, identifying suspects after a crime has been committed, or detecting credit card fraud after it has occurred. The core of predictive analytics is capturing relationships between explanatory variables and predicted variables from past events, and exploiting them to predict the unknown outcome. Of course, the accuracy and usefulness of the results depend strongly on the level of data analysis and the quality of the assumptions.
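A minimal sketch of the “score per individual” idea, assuming synthetic data and a simple logistic regression, might look like this:

```python
# Minimal sketch of the "predictive score per individual" idea: train a model on
# past events and output a probability for each new case. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for historical records (e.g. past orders with a churn/fraud label)
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_new, y_train, _ = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# One probability per "individual" (customer, SKU, machine, ...), usable to rank or act on
scores = model.predict_proba(X_new)[:, 1]
print(scores[:5])
```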
Machine Learning and predictive analytics can make a significant contribution to any organisation, but implementation without thinking about how they fit into day-to-day operations will severely limit their ability to deliver relevant insights.
To extract value from predictive analytics and machine learning, it is not just the architecture that needs to be in place to support these solutions. High-quality data must also be available to nurture them and help them learn. Data preparation and quality are important preconditions for predictive analytics. Input data can span multiple platforms and contain multiple big data sources. To be usable, this data must be centralised, unified and put into a coherent format.
To this end, organisations must develop a robust approach to monitor data governance and ensure that only high-quality data is captured and stored. Furthermore, existing processes need to be adapted to include predictive analytics and machine learning as this will enable organisations to improve efficiency at every point in the business. Finally, they need to know what problems they want to solve in order to determine the best and most appropriate model.
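The sketch below illustrates the unification step on two invented source tables: harmonising names and units, removing duplicates and producing one coherent dataset.

```python
# Hedged sketch of the data-preparation step: pull records from two hypothetical
# sources, harmonise column names and units, deduplicate, and produce one table.
import pandas as pd

erp = pd.DataFrame({"CustomerID": [1, 2], "Revenue_EUR": [1200.0, 450.0]})
crm = pd.DataFrame({"customer_id": [2, 3], "revenue_keur": [0.45, 0.9]})

erp = erp.rename(columns={"CustomerID": "customer_id", "Revenue_EUR": "revenue_eur"})
crm = crm.rename(columns={"revenue_keur": "revenue_eur"})
crm["revenue_eur"] = crm["revenue_eur"] * 1000          # align units (kEUR -> EUR)

unified = (pd.concat([erp, crm], ignore_index=True)
             .drop_duplicates(subset="customer_id", keep="first"))
print(unified)
```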
NATURAL LANGUAGE PROCESSING (NLP)
Natural language processing is the ability of a computer program to understand human language as it is spoken and written – also known as natural language. NLP is a way for computers to analyse and extract meaning from human language so that they can perform tasks such as translation, sentiment analysis, and speech recognition.
This is difficult, as it involves a lot of unstructured data. The style in which people speak and write (“tone of voice”) is unique to individuals and constantly evolves to reflect popular language use. Understanding context is also a problem, something that requires semantic analysis from machine learning. Natural Language Understanding (NLU) is a branch of NLP that picks up these nuances through machine reading comprehension rather than simply interpreting literal meanings. The purpose of NLP and NLU is to help computers understand human language well enough to converse naturally.
All these functions get better the more we write, speak and talk to computers: they are constantly learning. A good example of this iterative learning is a feature like Google Translate that uses a system called Google Neural Machine Translation (GNMT). GNMT is a system that works with a large artificial neural network to translate more smoothly and accurately. Instead of translating one piece of text at a time, GNMT tries to translate entire sentences. Because it searches millions of examples, GNMT uses a broader context to derive the most relevant translation.
The following is a selection of tasks in natural language processing (NLP). Some of these tasks have direct real-world applications, while others more often serve as sub-tasks used to solve larger tasks.
Optical Character Recognition (OCR)
Determining the text associated with a given image representing printed text.
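A minimal OCR sketch, assuming the open-source Tesseract engine and its Python wrapper pytesseract are installed, and using a placeholder file name:

```python
# Illustrative OCR sketch using the open-source pytesseract wrapper around
# Tesseract (which must be installed separately); the file name is a placeholder.
from PIL import Image
import pytesseract

image = Image.open("scanned_invoice.png")        # placeholder image of printed text
text = pytesseract.image_to_string(image)        # returns the recognised text
print(text)
```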
Speech Recognition
Determining the textual representation of speech on the basis of a sound fragment of one or more speakers. This is the opposite of text-to-speech and is an extremely difficult problem. In natural speech there are hardly any pauses between consecutive words, so speech segmentation is a necessary subtask of speech recognition (see ‘Word Segmentation’ below). In most spoken languages, the sounds representing successive letters merge into one another in a process called coarticulation, so converting the analog signal to discrete characters can be very difficult. Since words in the same language are spoken by people with different accents, speech recognition software must also be able to recognise a wide variety of inputs as textually equivalent.
Text-to-Speech
The elements of a given text are transformed and a spoken representation is produced. Text-to-speech can be used to help the visually impaired.
Word Segmentation (Tokenization)
Splitting a piece of continuous text into individual words. For a language like English this is fairly trivial, as words are usually separated by spaces. However, some written languages such as Chinese, Japanese and Thai do not mark word boundaries in this way, and for those languages text segmentation is an important task that requires knowledge of the vocabulary and morphology of words in the language. Word segmentation is sometimes also applied in, for example, preparing text for data mining.
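For a space-delimited language, the basic idea can be sketched in a couple of lines of Python; languages such as Chinese or Thai require dictionary- and model-based segmenters instead.

```python
# Minimal sketch of word segmentation (tokenisation) for a space-delimited
# language like English: split on whitespace, but keep punctuation as separate tokens.
import re

sentence = "Word segmentation splits continuous text into individual words."
tokens = re.findall(r"\w+|[^\w\s]", sentence)
print(tokens)
# ['Word', 'segmentation', 'splits', 'continuous', 'text', 'into', 'individual', 'words', '.']
```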
Document AI
A Document AI platform sits on top of NLP technology, allowing users with no previous experience with artificial intelligence, machine learning, or NLP to quickly train a computer to extract the specific data they need from different document types. NLP-powered Document AI enables non-technical teams to quickly access information hidden in documents, e.g. lawyers, business analysts and accountants.
Grammatical Error Correction
Grammatical error detection and correction involves a wide range of problems at all levels of linguistic analysis (phonology/orthography, morphology, syntax, semantics, pragmatics). Grammatical error correction has a major impact because it affects hundreds of millions of people who use or learn a second language. With the development of powerful neural language models such as GPT-2, correction of spelling, morphology, syntax and certain aspects of semantics has been regarded as largely solved since 2019. Various commercial applications are available in the market.
Machine Translation
Automatically translating text from one human language to another is one of the most difficult problems: doing it properly requires all kinds of knowledge, such as grammar, semantics and real-world facts.
Natural Language Generation (NLG)
Converting information from computer databases or semantic intent into human readable language.
Natural Language Understanding (NLU)
NLU concerns the understanding of human language, such as Dutch, English and French, allowing computers to understand commands without the formalised syntax of computer languages. NLU also allows computers to communicate back to people in their own language. The main goal of NLU is to create chat and voice-enabled bots that can communicate with the public unsupervised. A typical NLU task is question answering: determining the answer to a question posed in human language. Typical questions have a specific correct answer, such as “What is the capital of Finland?”, but sometimes open questions are also considered (such as “What is the meaning of life?”).
How does understanding natural language work? NLU analyses data to determine its meaning by using algorithms to reduce human speech to a structured ontology, a data model made up of semantic and pragmatic definitions. Two fundamental concepts of NLU are intent recognition and entity recognition. Intent recognition is the process of identifying user sentiment in input text and determining its purpose; this is the first and most important part of NLU as it captures the meaning of the text. Entity recognition focuses on identifying the entities in a message and then extracting key information about those entities. There are two types of entities: named entities and numeric entities. Named entities are grouped into categories, such as people, businesses and locations. Numeric entities are recognised as numbers, currency and percentages.
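As an illustration of entity recognition, the sketch below uses the open-source spaCy library and its small English model (an assumption; any comparable NLP toolkit would do) to pull entities out of a single sentence.

```python
# Sketch of the entity-recognition part of NLU with spaCy; assumes the small
# English model has been downloaded (python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Book a flight from Amsterdam to Helsinki on Friday for 250 euros.")

for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. Amsterdam GPE, Helsinki GPE, Friday DATE, 250 euros MONEY
```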
Text-to-picture generation
Describe an image and generate an image that matches the description.
Natural language processing – understanding people – is key to AI justifying its claim to intelligence. New deep learning models are constantly improving the performance of AI in Turing tests. Google’s Director of Engineering Ray Kurzweil predicts AIs will “reach human levels of intelligence by 2029” (link resides outside Axisto).
By the way, what people say is sometimes very different from what people do. Understanding human nature is by no means easy. More intelligent AIs expand the perspective of artificial consciousness, opening up a new field of philosophical and applied research.
SPEECH
Speech recognition is also known as automatic speech recognition (ASR), computer speech recognition or speech-to-text. It is a capability that uses natural language processing (NLP) to process human speech in a written format. Many mobile devices incorporate speech recognition into their systems to perform voice searches, e.g. Siri from Apple.
An important area of speech in AI is speech-to-text, the process of converting audio and speech into written text. It can help visually or physically impaired users and can promote safety through hands-free operation. Speech-to-text systems rely on machine learning algorithms that learn from large datasets of human voice samples to reach adequate usability. Speech-to-text has value for businesses because it can help transcribe video or phone calls. Text-to-speech converts written text into audio that sounds like natural speech. These technologies can be used to help people with speech disorders. Polly from Amazon is an example of a technology that uses deep learning to synthesise human-sounding speech, for example for e-learning and telephony.
Speech recognition is a task in which speech is received by a system through a microphone and checked against a large vocabulary database using pattern recognition. When a word or phrase is recognised, the system responds with the corresponding verbal response or a specific task. Examples of speech recognition include Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana and Google’s Google Assistant. These products must be able to recognise a user’s speech input and assign the correct speech output or action. Even more sophisticated are attempts to generate speech from brain waves for those who cannot speak or have lost the ability to speak.
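A minimal speech-to-text sketch, assuming the open-source SpeechRecognition package, a placeholder WAV file and the free Google Web Speech API as the recognition backend:

```python
# Illustrative speech-to-text sketch with the SpeechRecognition package; the audio
# file is a placeholder and the Google Web Speech API is used for recognition.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("meeting_fragment.wav") as source:   # placeholder WAV file
    audio = recognizer.record(source)                  # read the whole fragment

print(recognizer.recognize_google(audio))              # transcript of the fragment
```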
EXPERT SYSTEMS
An expert system uses a knowledge base about its application domain and an inference engine to solve problems that normally require human intelligence. An inference engine is the part of the system that applies logical rules to the knowledge base to derive new information. Examples of expert system applications include financial management, business planning, credit authorisation, computer installation design and airline planning. For example, an expert traffic management system can help design smart cities by acting as a “human operator” to relay traffic feedback for appropriate routes.
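The mechanics of an inference engine can be shown in a few lines. The sketch below is a toy forward-chaining engine with invented traffic rules; real expert systems are of course far richer.

```python
# Toy sketch of an expert system: a forward-chaining inference engine that applies
# IF-THEN rules to a knowledge base of facts until nothing new can be derived.
# The traffic rules here are illustrative only.
rules = [
    ({"accident_reported", "rush_hour"}, "severe_congestion"),
    ({"severe_congestion"}, "reroute_traffic"),
    ({"rain"}, "reduce_speed_limit"),
]
facts = {"accident_reported", "rush_hour"}

changed = True
while changed:                                   # keep applying rules until a fixed point
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # now includes 'severe_congestion' and 'reroute_traffic'
```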
A limitation of expert systems is that they lack the common sense people have, such as an understanding of the limits of their skills and how their recommendations fit into the bigger picture. They lack the self-awareness of people. Expert systems are not a substitute for decision makers because they lack human capabilities, but they can dramatically ease the human work required to solve a problem.
PLANNING, SCHEDULING AND OPTIMISATION
AI planning is the task of determining how a system can best achieve its goals. It is choosing sequential actions that have a high probability of changing the state of the environment incrementally in order to achieve a goal. These types of solutions are often complex. In dynamic environments with constant change, they require frequent trial-and-error iteration to fine-tune.
Scheduling means creating schedules: temporal assignments of activities to resources, taking into account goals and constraints. Planning determines the sequence and timing of the actions that an algorithm generates. These tasks are typically performed by intelligent agents, autonomous robots and unmanned vehicles. When designed properly, they can solve organisational scheduling problems in a cost-effective way. Optimisation can be achieved with one of the most popular machine learning and deep learning optimisation strategies: gradient descent. Gradient descent trains a machine learning model by changing its parameters iteratively so as to minimise a given function towards a local minimum.
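As a concrete illustration of gradient descent, the sketch below fits a straight line to synthetic data by repeatedly stepping the parameters against the gradient of the mean squared error.

```python
# Minimal gradient-descent sketch: fit a straight line y = w*x + b by iteratively
# moving the parameters against the gradient of the mean squared error.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 1, 200)        # synthetic data with known slope/intercept

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    error = (w * x + b) - y
    grad_w = 2 * np.mean(error * x)              # d(MSE)/dw
    grad_b = 2 * np.mean(error)                  # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))                  # close to 3.0 and 2.0
```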
ROBOTICS
Artificial Intelligence sits at one end of the Intelligent Automation spectrum, while Robotic Process Automation (RPA), software robots that mimic human actions, sits at the other end. One is concerned with replicating how people think and learn, while the other is concerned with replicating how people do things. Robotics develops the complex sensor-motor functions that enable machines to adapt to their environment. Robots can sense the environment using computer vision.
The main idea of robotics is to make robots as autonomous as possible through learning. Although robots have not achieved human-like intelligence, there are many successful examples of robots performing autonomous tasks such as carrying boxes and picking up and putting down objects. Some robots can learn decision making by associating an action with a desired outcome. Kismet, a robot at M.I.T.’s Artificial Intelligence Lab, learns to recognise both body language and voice and respond appropriately. This MIT video (link resides outside Axisto) gives a good impression.
COMPUTER VISION
Computer vision is an area of AI that trains computers to capture and interpret information from image and video data. By applying machine learning (ML) models to images, computers can classify and respond to objects, such as facial recognition to unlock a smartphone or approve intended actions. When computer vision is coupled with Deep Learning, it combines the best of both worlds: optimised performance combined with accuracy and versatility. Deep Learning offers IoT developers greater accuracy in object classification.
Machine vision goes one step further by combining computer vision algorithms with image registration systems to better control robots. An example of computer vision is a computer that can “see” the unique series of stripes on a universal product code, scan it and recognise it as a unique identifier. Optical Character Recognition (OCR) uses image recognition of letters to decipher printed paper records and/or handwriting, despite the wide variety of fonts and handwriting variations.
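The principle of applying a machine learning model to image data can be shown with the small digits dataset that ships with scikit-learn; production computer vision uses deep learning, but the classify-from-pixels idea is the same.

```python
# Minimal image-classification sketch: train a classifier on the small scikit-learn
# digits dataset and predict the class of unseen images, illustrating the principle
# of applying ML models to image data.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()                                   # 8x8 grey-scale images of digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

classifier = SVC(gamma=0.001).fit(X_train, y_train)
print(f"Accuracy on unseen images: {classifier.score(X_test, y_test):.2%}")
```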
CHALLENGE
Faced with rapidly toughening global competition, customer demands and cost pressures, the management team of this manufacturing and technology licensing company needed to increase both the effectiveness and efficiency of its innovation/R&D process in order to secure future market opportunities. The challenge was to increase the success rate of innovation projects and to cut the time from product concept to market introduction by half.
The company had a central R&D department, but there were also people working on innovation in the various plants across Europe. The people in the plants were closest to the customer and were working mainly on applications, whereas those in the central R&D department were doing ‘blue-sky’ development.
APPROACH
We began by working together with the client’s European business team, the R&D hub and representatives of the process engineers from the various plants throughout Europe to analyse the “as is” situation. Three main issues were identified:
The customer release process was problematic because samples did not meet customer requirements, and this had created the perception of the company being an unreliable supplier for its customers.
The increased number of additives being used was creating in-house manufacturability issues and additional complexity both in the plants and in the supply chain.
Instability in the innovation/R&D portfolio contributed to an increasing time-to-market.
However, the definition and management of product platforms was strong, as was the skills level throughout the innovation/R&D organisation. Therefore, there were some solid elements we could build on.
We worked as a joint team with the European business team, the R&D hub and process engineers from the plants to identify the root causes of problems, establish key levers to turn the situation around, and set clear and challenging targets. We set up a project governance structure, including a review team, a project team and various workstream teams, and established milestone deliverables. We used a combination of “waterfall” and “agile” project approaches to get things done. We selected nine pilot projects to introduce the new ways of working and deliver actual results.
THE IMPLEMENTATION
Performance improvement programmes must carefully balance human and technical aspects if they are to deliver significant, sustainable results. A critical aspect for sustainability is the development of a deep local ownership of the solutions to the problems. Therefore, we approached the challenge by ensuring the solutions were developed by a process of co-creation right from the start.
We worked on three workstreams in parallel:
Integrated strategic and operational planning
We started to break down organisational silos by bringing people together from the business team, the R&D hub and the plants in a series of workshops to craft an integrated strategic and operational planning process and to create a management and reporting structure. This meant that when technology and product roadmaps were generated, they were better aligned with market requirements and timelines. It also prevented the development of applications and platforms from becoming intertwined.
Transparent portfolio management
We achieved greater transparency and alignment through broadening the employees’ skill base and developing the use of existing IT tools. This meant that employees were better able to deal with uncertainty and to understand investment alternatives when “go–no go” decisions were being taken at stage gates along the innovation process. The development of effective behaviours in the project teams and around these tollgates was paramount throughout the implementation.
Design rules and complexity
The principle of product platforms/product families was well understood and adhered to; however, new applications were not being managed well. A variety of additives was being used to achieve the same properties, and this was creating more and more complexity both in manufacturing and in the supply chain. In addition, rules such as design for manufacture were not tightly managed. In one case the company’s client was deeply impressed by the time-to-market of the new product they required, and the properties were spot on. However, the problem was that manufacturing the new product caused the production output to drop by 30%.
We ensured that the design rules were more explicitly defined, documented and accessible for everyone. We also introduced clear accountabilities and responsibilities to tighten the process for releasing additives and managing their variety.
The “as is” situation at the start of our joint project provided a good basis to build on. Many of the elements of world-class innovation management were already in place. The performance improvement was due mainly to an improved organisational alignment (and integration), more effective behaviours and, in particular, a more disciplined use of tools and methods.
Of course, there are a range of useful multi-project-management IT tools that can enhance visibility and enable more effective project portfolio management; however, the challenge here is to foster the behavioural change and teamwork that is required to build on the IT capability and not to rely solely on the IT tool to change the way people work.
Innovation, in contrast to health, safety and environmental management, demands risk-taking. DuPont’s Robert A Cooper sums up the requirement neatly: “Don’t manage the risk of failure. Manage the cost of failure.” Achieving this goal does not mean avoiding failure; it means failing clearly and early. To facilitate this behaviour, we developed a clearly defined and staged project management process with tollgates and explicit tollgate criteria. In fact, processes were designed for various project categories. “Go–no go” decisions could now be made based on facts. The development of effective behaviours in the innovation project teams and around these tollgates was paramount throughout the implementation.