
Technology is molding the way we live. The internet, for instance, has transformed communication by connecting over 5 billion people worldwide. Medical technology and genomics have extended life expectancy in modern societies. The manufacturing revolution changed the way we eat, consume, and work. Social media influences our thoughts, emotions, and actions, with people consuming more than two hours of content per day on average. Because AI can be combined with each of these technologies, it has the capacity to revolutionize industries such as healthcare, finance, education, and more, and therefore the potential to fundamentally transform the way we live.



Many articles discuss the long-term implications of AI adoption, as well as the potential emergence of AGI. Given the complexity of such assessments, however, it is difficult to predict precisely how and when things will unfold. Therefore, as a first step, this article focuses on the current state of AI adoption in business operations and the actions businesses are taking today to improve their performance. It can also be read as a map of where businesses are heading. We will highlight five main drivers of this motion, along with their associated sources of friction:


  1. Costs (follow the money) - AI has the potential to reduce costs in several ways, such as automating data discovery, generating insights, and delegating tasks. Using AI for discovery means automatically detecting hidden patterns in data, and it can be applied in diverse fields, from sales to security. For instance, grouping customers into clusters across different channels can reveal pricing anomalies and increase profitability (a minimal clustering sketch follows this list). Additionally, AI can generate insights by leveraging forecasting and segmentation techniques, enabling data-driven decision-making. Furthermore, businesses can delegate tasks to machines or bots, such as automating marketing campaigns or customer success services. However, implementing AI entails investment, including R&D, acquiring AI expertise, integrating AI solutions into existing workflows, and upgrading IT infrastructure to support AI operations. As a result, AI transformation comes with short-term expenses related not only to the shift from digital to AI but also to the transition from traditional machine learning models to more advanced deep neural networks.

  2. Efficiency (smooth operations) - Efficient operations are essential to meet the expectations of modern customers, who demand speedy and smooth processes. AI can serve as a potent means of achieving such optimization by automating repetitive patterns and simplifying workflows. For instance, an AI-based supply chain management system can optimize inventory levels and shipping routes, lowering expenses and improving delivery times. Likewise, automating aspects of CRM can leverage customer feedback loops to shorten turnaround times. Nonetheless, integrating AI technologies necessitates modifying business procedures and infrastructure, and employees will need time to learn new skills. In the near term, there may be interruptions as employees adjust to new work methodologies.

  3. Differentiation (leverage and liability) - AI services can provide businesses with a competitive edge and drive improved outcomes by utilizing the unique data streams within many business cycles. For instance, voice, text, images, and time-dependent signals can be fed into automated processes, leveraging AI models for insights, recommendations, and automation, as well as improving customer-facing responses. At the same time, the "black box" nature of AI models and the rapid evolution of this domain raise legal and liability risks regarding the data the models use. Businesses are therefore required to balance the leverage their unique data provides against legal and privacy constraints.

  4. Agility (keeping up with the pace) - Being able to meet customer expectations is a vital factor for any business to thrive. Today, customers demand personalized and prompt responses, transparent communication, and reliable services that cater to their specific requirements. To satisfy these demands in a cost-effective manner, companies employ AI-based chatbots and virtual assistants that provide round-the-clock customer service, facilitating speedy issue resolution. However, the fast-paced advancements in AI technology also pose a challenge to the process of AI transformation, from the design phase to implementation and maintenance, as modern solutions can become outdated in a short period of time.

  5. The human factor (organizational culture and emotions) - The foundation of any business encompasses human-oriented elements, such as leadership, communication, vision, empathy, critical thinking, novelty, and organizational culture, that cannot be assigned to machines. The process of AI transformation also entails recruiting, educating, and restructuring the company, and like any transformation, it is associated with the intense emotions linked to change. In some instances, resistance to change may arise, while in others, a high level of energy may emerge from the potential for renewal and the reinvention of traditional patterns to align with future prospects.
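As a concrete illustration of the discovery use case from point 1, here is a minimal, hypothetical sketch of customer segmentation with k-means clustering, using scikit-learn on synthetic data; the features, segment count, and review criterion are assumptions for illustration, not a prescribed pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-customer features: annual spend, order count, average discount
rng = np.random.default_rng(42)
X = np.column_stack([
    rng.gamma(2.0, 500.0, 300),   # annual spend
    rng.poisson(12, 300),         # number of orders
    rng.beta(2, 8, 300),          # average discount rate
])

# Scale features so no single unit dominates the distance metric
X_scaled = StandardScaler().fit_transform(X)
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)

# Compare average spend and discount per segment; segments whose discounts
# look out of line with their spend profile are candidates for a pricing review
for seg in range(4):
    mask = segments == seg
    print(f"segment {seg}: spend≈{X[mask, 0].mean():.0f}, discount≈{X[mask, 2].mean():.2f}")
```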


Although the factors mentioned above are generally applicable across different industries, each company must consider its distinct attributes when devising its AI strategy, such as industry, regulations, size, digital readiness, and other pertinent factors. The swift emergence of large language models has significantly accelerated the adoption of AI, and many C-level executives are ensuring that AI transformation is part of their overall strategy. Although an AI strategy should be assessed on a case-by-case basis, there are three essential measures that can facilitate a successful transformation.


  1. Education and training - It's important to ensure that employees possess the requisite knowledge and skills to utilize AI effectively and enhance business operations. It is equally important to be mindful of potential risks and liabilities and to establish mechanisms for accountability.

  2. Positioning - To maximize the benefits of AI, companies should conduct a thorough analysis of their business structure and strategy, deconstructing what makes them unique in order to identify potential opportunities and minimize risks.

  3. Incremental investments - Make small-scale investments to cultivate an AI culture. This enables a gradual development of effective practices, which can yield significant long-term benefits.


If you have any questions, feedback, or comments, please feel free to reach out and share your thoughts.


Oren Elisha

Updated: Mar 13, 2023

Historical events such as the attack on Pearl Harbor and the Yom Kippur War are stark reminders of the impact of unexpected surprises. Although relevant information was available in both cases, the ability to analyze and act upon it was lacking, with far-reaching consequences. To avoid being caught off guard by unexpected events, government agencies invest significant resources in collecting and analyzing relevant data to provide early warnings and alerts.



In the business world, strategic surprises come in the form of disruptive events that can alter the competitive landscape, such as rapidly advancing technology or new opportunities that transform business practices. Like government agencies, many companies do their best to prepare for these events. To stay competitive in a dynamic business environment, larger companies allocate significant resources to their CTO and CIO offices, while smaller businesses rely on their top executives to closely monitor emerging trends and patterns.


In early 2023, as seen in the Google Trends figure below, generative AI formed a disruptive wave of excitement and interest that caught many businesses by surprise. Its impact on various industries is still being analyzed, and it has the potential to fundamentally change business practices. In future posts, we will explore what may become obsolete and the opportunities it presents.

Figure: Google Trends results for the search term "generative AI".


In this post, I would like to discuss why many companies failed to anticipate the rise of generative AI, and suggest ways for organizations to become more resilient to changes in technology. By examining these issues, we can learn from past mistakes and become more resilient in the face of change. I acknowledge that such analysis relies on the "wisdom of hindsight"; however, its purpose is not to judge past decisions but to break down past patterns in order to enhance future actions.


In the following sections, we will examine five key reasons for the strategic surprise in the context of generative AI. These include the crying wolf effect, underestimating technological advancements, decluttering noise from signals, overemphasis on short-term gains, and narrow focus on traditional business mental models.


The crying wolf effect - The initial hype around chatbots in 2016 led to high expectations from stakeholders about the potential of chatbots to revolutionize customer service and user experience. However, building a chatbot that could understand complex queries and provide helpful responses proved more challenging than anticipated. As a result, many chatbot projects failed to live up to their promises, leading to disappointment among stakeholders and a loss of trust in the technology. High-profile discontinuations of chatbot projects, such as Facebook's M, further contributed to the crying wolf effect, where stakeholders became skeptical about the potential of new technology to deliver on its promises. This led to a decline in news coverage and funding for chatbot startups, as well as a shift in focus towards more proven technologies. More importantly, when initial signals of genuine improvements in generative AI for natural text arose, they were often dismissed with "we've been there before...". In these situations, establishing conviction in a genuine signal involves two key elements: demonstrating the presence of the signal and articulating why the current circumstances are different. This level of conviction demands technical expertise, which brings us to the next point.


A profound understanding of technological advancements - The remarkable achievements of large language models, as reflected in ChatGPT, can be attributed to substantial advancements in three technical domains over the past decade. The first is the capacity to convert natural language into a machine-readable format, which evolved from word2vec representations of individual words through recurrent neural networks (RNNs) to attention-based transformer models that capture the semantic nuances of whole sentences. The second is progress in deep neural networks, which have transitioned from producing outstanding outcomes in image classification to generative models. The third is progress in engineering capabilities, such as training on more extensive datasets and utilizing larger models. For instance, GPT-3 has 175 billion parameters that demand 800GB of storage and considerable infrastructure and preparation for its training. To the untrained eye, these numbers may seem unremarkable, but the extensive investments in GPT-3, encompassing significant human effort and expensive computing resources, reflect a pervasive confidence that larger models can catalyze a transition from classification to generation.
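To make the attention mechanism mentioned above more concrete, here is a minimal sketch of scaled dot-product attention in plain NumPy. It is a toy, single-head illustration of the core computation inside transformers, not a trained model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each token's output is a weighted mix of all token values,
    with weights derived from query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise token similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # context-aware representations

# Toy example: 4 "tokens", each already embedded as an 8-dimensional vector
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
contextual = scaled_dot_product_attention(tokens, tokens, tokens)
print(contextual.shape)  # (4, 8): same shape, but each row now blends in context
```

The point of the sketch is that every token attends to every other token at once, which is what lets transformer models capture sentence-level semantics in a way that word-level embeddings and sequential RNNs struggle to match.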



Decluttering noise from signals - Businesses rely on ongoing feedback to address current and future customer needs. Most companies develop internal feedback loops that allow them to gather insights from a variety of sources, including their employees, customers, industry partners, and external consultants. According to Reuters, ChatGPT set a record by reaching 100 million monthly active users less than two months after its launch. The maturity, adoption, and impact that generative AI is having on the industry landscape seem unprecedented. While analyst reports were available, it is not trivial to derive a strong "call to action" from resources such as Gartner's 2022 hype cycle. Strategic planning in the context of disruptive technology is a broad topic, covering aspects ranging from "skin in the game" to the ways models should and shouldn't be presented for decision making. I therefore intend to dedicate a future post solely to this subject, which I hope to share soon.


The role of AI in production - For certain sectors, AI was seen as a supplementary feature, restricted to delivering insights or recommendations, or as a specialized tool confined to a single area like computer vision. As a result, planning and deploying the AI strategy was often left for later phases of the product life cycle. This approach was largely driven by the startup mindset of focusing on minimum viable products (MVPs) and incremental bootstrapping. Part of the surprise of the generative AI revolution is the sudden realization that large language models are transitioning from an incremental enhancement to a "must have" for staying relevant in the competitive landscape. This shift highlights the importance of understanding the role of AI in production and its potential impact on businesses. Therefore, even if AI components are scheduled for later phases of the development cycle, their conceptualization and design should be incorporated as an essential part of product and feature planning at early stages.


First-hand experience - As the number of available technologies keeps growing, it is getting harder to keep track of emerging capabilities, and harder still to distinguish production-ready technology from an immature marketing effort. While there is no substitute for first-hand experience, the next best thing is to maintain a dialogue with trusted professionals who can help us understand the core of the technology and ring the alarm when needed. As previously highlighted, different businesses have different approaches to handling swift changes proactively. Nevertheless, even companies built on the premise of being agile must stay alert to changes that could cause significant upheaval.


In conclusion, the rise of generative AI caught many businesses by surprise, and it has the potential to fundamentally change many business practices. It is an example of the ongoing need to evaluate disruptive technologies with high potential impact on our industries. With a growth mindset, businesses can learn from past events and become more resilient in the face of change.


The integration of Artificial Intelligence (AI) into various industries has brought new opportunities for improvement and innovation. When planning an AI model to enhance a specific service, several crucial elements must be taken into account. These components play a significant role in determining the complexity and success of the project. In this post, we will delve into these factors, showcasing their individual importance and the interrelated nature that can impact the outcome of the project. Understanding these factors is key to effectively planning and executing a successful AI project.


Let's examine the key elements for structuring and planning successful AI deployment in production:


  • The service that businesses aim to improve with AI can vary, from customer service to marketing to process optimization. It's important to clarify how the AI model will be utilized, for instance, for discovery, recommendation, insights, or automation. The level of automation is determined by the model's accuracy and vice versa. Due to the "black box" nature of many AI models, conducting a comprehensive sensitivity analysis is crucial to successfully automating processes with AI. Different industries also have varying sensitivity and risk factors based on the services offered, particularly sensitive industries like finance and healthcare, which have unique constraints such as legal and ethical considerations, reputation, and liability.

  • The data that AI models can be fed includes various input types, such as tabular data, images, text, and time-based data. Each data type requires different handling, expertise, and sometimes different models. Outputs from AI models can vary from strings, numeric values, and vectors to more advanced forms like images, text, and sequences. It's crucial to understand a model's inputs and outputs in order to design the proper pipeline for integrating with it.

  • The AI model's hosting platform can range from cloud-based solutions, to on-premise systems, to mobile devices, each with its own distinct features and restrictions. The platform choice affects various aspects such as the model's size and intricacy, data volume, and available resources for deployment and upkeep. Additionally, different platforms call for varying code dependencies, libraries, and expertise. Knowing the advantages and limitations of each platform is also key for a successful AI deployment.

  • Machine learning (ML) entails training models to make predictions or decisions based on data. There are multiple machine learning techniques, such as supervised learning, unsupervised learning, and reinforcement learning. Each has its own computation methods, like classification and regression in supervised learning and clustering and association in unsupervised learning. If data and service are properly analyzed and limitations of the hosting platform are known, setting up machine learning should be straightforward. However, there is some room for error, like using a hammer instead of a screwdriver. An important milestone in this step is determining the appropriate metric for evaluating the model's quality.

  • Once the ML method is selected, the next step is choosing an algorithm and setting its hyperparameters. The algorithm should handle the data's complexity and scale and enable training that delivers optimal results. Therefore, in addition to setting the algorithm's architecture, hyperparameters are tuned to achieve an improved outcome. With advancements in ML, a variety of algorithms are now available, including a wide range of decision trees, SVMs, and neural networks. The algorithm selection can be done through AutoML or through a manual ROI analysis, considering accuracy's impact on the outcome (e.g., revenue); a minimal sketch follows this list.
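As referenced in the last item, here is a minimal sketch of the metric and hyperparameter steps using scikit-learn on synthetic data; the decision tree, parameter grid, and F1 metric are illustrative assumptions rather than a recommended configuration.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for an imbalanced business dataset (e.g., churn labels)
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Score the search with F1 rather than plain accuracy because the classes are imbalanced
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [3, 5, 10, None], "min_samples_leaf": [1, 5, 20]},
    scoring="f1",
    cv=5,
)
search.fit(X_train, y_train)
print("best hyperparameters:", search.best_params_)
print("held-out F1:", round(f1_score(y_test, search.predict(X_test)), 3))
```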


Choosing the right setup for these five interlinked elements requires both a strong grasp of the business and a sound engineering framework. For example, sentiment classification from text reviews can vary greatly depending on the platform it runs on, such as mobile or cloud, or the purpose it serves, like a dating app or homeland security. By following a thoughtful approach, businesses can develop AI models that are efficient, effective, and scalable, thereby aiding their automation and goal achievement.
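To ground the sentiment example, the snippet below scores text reviews with a pretrained transformer through the Hugging Face pipeline API. This is the kind of large, cloud-friendly model a hosted service might use; shipping the same capability on a mobile device would likely call for a far more compact model, which is exactly the platform trade-off described above.

```python
# Assumes the `transformers` library; the call downloads a default English sentiment model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
reviews = [
    "The delivery was late, but support resolved it quickly.",
    "Terrible experience, the app keeps crashing.",
]
for review, result in zip(reviews, classifier(reviews)):
    print(result["label"], round(result["score"], 3), "-", review)
```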

