From the integration of cutting-edge technologies to the reimagining of traditional processes, my predictions highlight the transformative journey that lies ahead. Whether you're a CFO steering your organization through complexities, an accounts payable manager embracing new tools, or a stakeholder keen on understanding the future financial fabric, this post will help you navigate the promising yet challenging terrain of enterprise finance and accounting in 2024. Join me as I explore these forecasts, preparing you not just to respond to the changes, but to strategically leverage them for growth and excellence.
My first prediction is that the cost of building foundation models is expected to drop significantly in the coming year, driven by a confluence of factors. Advancements in model efficiency will require less computational power to train and run, and more powerful and cost-effective computing hardware and competitive pricing will make high-powered computing resources more accessible and affordable.
The growing availability of open-source models and community-driven initiatives is reducing the time needed to train, retrain, and fine-tune models, and innovations in data efficiency are further cutting expenses. Techniques for making models more data-efficient reduce the need for vast training datasets, and many developers can kickstart their work with the available and ever-expanding set of large language models (LLMs).
As a result of these reduced costs and advancements in model efficiency, the startups building foundation models are poised to have their valuations impacted. As the barriers to entry lower due to reduced costs, the market may see an influx of new startups, intensifying competition. While lower costs may necessitate distinct value propositions to stay competitive, they also open doors for innovation, market expansion, and reduced capital requirements.
Startups could potentially leverage these changes to attract investment by focusing on specialized applications such as those focused on finance, forming strategic partnerships, and utilizing unique datasets or proprietary algorithms. The net impact on valuation will hinge on a startup's ability to adapt and innovate in this evolving, cost-efficient environment, balancing the challenges with the newfound opportunities.
I think 2024 will see the first major hack and data exfiltration from a major large language model provider. The attack surface and attack vectors around large language models continue to change very rapidly, and the corpora of data behind them represent a new class of information that bad actors want to get their hands on. As LLMs become more integral to business operations and decision-making processes, the amount and sensitivity of the data they process increase. This makes them more attractive targets for cybercriminals looking to steal valuable information.
These fraudsters could then use the LLM data to essentially influence the applications that use that data and in some cases, inject false or bad data into results, queries, and other outputs. There will be at least one major provider for large language models that faces or discloses a major hack and data exfiltration.
Assuming this happens, it will drive the need for significant investments in data security, data privacy, and encryption, extended to LLM-based interactions and applications. AI governance will be a hot topic of discussion.
Overlooking or underestimating the security requirements of LLMs leaves exploitable gaps. Everything from firewalls, to data security and encryption tools, to access controls that guard against prompt-based attacks, to governed prompt engineering tools will essentially become requisite by virtue of this potential for a major attack. But this will also drive the natural evolution of how security tools adapt to changing LLM environments and how the enterprise adopts artificial intelligence in general.
Traditionally, large language model usage has been mostly text-centric or image-based. But if LLMs can support different types of prompts for queries, including text, images, video, and audio, then their ability to understand and process various data types expands dramatically, offering enhanced performance and richer user interactions.
I believe starting this year and over time, multi-modal interactions will become more powerful and eventually more prevalent than single-mode interactions, which will still be available but will dwindle in popularity.
Multi-modal LLMs would process and correlate information from various sources and formats, providing a more integrated and holistic approach to data analysis and decision-making. This ability is particularly beneficial in complex operational environments where decisions are based on a wide array of information types. The multi-modal approach will become the de facto standard as the ability to show relative causation is much easier when text and imagery are combined.
For example, take a video clip of somebody performing a certain process - it doesn't have to be a computer task. It could be a physical task. Now feed that into an LLM, combine it with other prompts - PDF manuals, YouTube how-to videos, and other available information - and that becomes the basis for stitching together a sequence and output. The model shows how to do the task not just through instructions, but through visuals and other media.
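To make that concrete, here is a minimal, provider-agnostic sketch of assembling such a multi-modal prompt. The part types, file references, and the `MultiModalPrompt` structure are all illustrative assumptions - real APIs each define their own multi-part message formats - but the idea of combining a task video, a manual excerpt, and a how-to clip into one request is the same.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the part kinds and structure are assumptions,
# not any particular provider's API.

@dataclass
class PromptPart:
    kind: str       # "text", "image", "video", or "audio"
    content: str    # inline text, or a URI/reference for media

@dataclass
class MultiModalPrompt:
    instruction: str
    parts: list[PromptPart] = field(default_factory=list)

    def add(self, kind: str, content: str) -> "MultiModalPrompt":
        """Append one part and return self for chaining."""
        self.parts.append(PromptPart(kind, content))
        return self

    def modalities(self) -> set[str]:
        """Which input types this prompt combines."""
        return {p.kind for p in self.parts}

# Combine a demo video, a manual excerpt, and a how-to clip
# into a single request, as described above.
prompt = (
    MultiModalPrompt("Produce step-by-step instructions for this task.")
    .add("video", "file://recordings/task_demo.mp4")
    .add("text", "Excerpt from the equipment manual: ...")
    .add("video", "https://example.com/howto-clip")
)
```

The payoff of this shape is exactly the correlation described above: one request carries multiple modalities that the model can cross-reference.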
The ability of multi-modal LLMs to engage users in more natural and intuitive ways would become invaluable as they cater to diverse user preferences and accessibility needs, leading to their increased adoption.
In parallel, the emergence and standardization of prompt engineering will be driven by the need to effectively harness the power of LLMs. Well-crafted prompts are crucial for eliciting the best performance from an LLM. As the use of LLMs becomes more widespread, the demand for standardized, effective prompt engineering practices will grow. Standardized approaches to prompt engineering help address ethical concerns and mitigate biases in LLM outputs, an increasingly important consideration as these models are deployed in more sensitive and impactful domains.
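One plausible form "standardized prompt engineering" could take is a shared template with named slots, so every prompt an organization sends is consistent, reviewable, and auditable for bias. The template text and slot names below are my own illustrative assumptions, not an established standard:

```python
# A hedged sketch of standardized prompting: all requests are rendered
# from one reviewed template rather than ad hoc free text.
TEMPLATE = (
    "Role: {role}\n"
    "Task: {task}\n"
    "Constraints: {constraints}\n"
    "Output format: {output_format}"
)

def build_prompt(role: str, task: str,
                 constraints: str = "Cite sources; flag uncertainty.",
                 output_format: str = "Bulleted summary") -> str:
    """Render a prompt from the shared template with named slots."""
    return TEMPLATE.format(role=role, task=task,
                           constraints=constraints,
                           output_format=output_format)

# Example: a finance-flavored request rendered through the standard.
prompt = build_prompt(
    role="Senior accountant",
    task="Summarize the variance between budgeted and actual Q3 spend.",
)
```

Because defaults like the constraints line are centralized, ethical guardrails can be updated in one place rather than hunted down across thousands of ad hoc prompts.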
The move towards multi-modal LLMs as the operational standard is driven by their superior performance and versatility, while the development of standards in prompt engineering will be the response to the growing need for effective, consistent, and ethical utilization of these powerful models.
My next prediction is that traditional ERP providers and applications that have used forms and fields as the means of interaction and user interfaces will replace them with natural language interfaces.
Most systems today require user input to be submitted through forms, followed by a sequence of button pushes and click-throughs that move the user through a workflow, screen by screen. This will go away completely as natural language interactions become the primary mode of engagement. The crew on Star Trek had it right when they interfaced directly with the computer.
Moving forward, a human will tell the application what to do. The application will ingest these inputs and perhaps what outcome needs to be achieved. It could be voice-based, it could be text-based inputs, or it could be a combination of these different types, but fundamentally the human will interact with the application, naturally. The application will interpret what the human is providing and use that to string together the output action. Natural language interactions provide a more intuitive and accessible user interface.
The workflows bring together the deterministic outcomes that the business process demands. Speaking or typing becomes the primary mode of engagement and interaction. This also reduces the learning curve and streamlines the interpretation and action of the technology.
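A toy sketch of that shift: a spoken or typed request is interpreted into the same deterministic workflow action that a multi-screen form would have produced. The intent names, patterns, and action schema here are illustrative assumptions, and a production system would use an LLM rather than regexes; the point is that only the interface changes, not the workflow's deterministic outcome.

```python
import re

# Hypothetical intents for an accounts payable workflow (assumptions).
INTENTS = {
    "pay_invoice": re.compile(
        r"pay invoice (?P<invoice_id>\d+) for \$(?P<amount>[\d.]+)", re.I),
    "check_status": re.compile(
        r"status of invoice (?P<invoice_id>\d+)", re.I),
}

def interpret(utterance: str) -> dict:
    """Map a natural-language request to a structured workflow action."""
    for intent, pattern in INTENTS.items():
        match = pattern.search(utterance)
        if match:
            return {"action": intent, **match.groupdict()}
    # Fall back to asking for clarification, as a human clerk would.
    return {"action": "clarify", "message": "Could you rephrase that?"}

action = interpret("Please pay invoice 4471 for $250.00 today.")
```

The resulting `action` dictionary is exactly what the old forms collected field by field; the deterministic workflow downstream is unchanged.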
The rapid advancements in AI and Natural Language Processing (NLP) technologies make it more feasible and cost-effective for companies to implement natural language interfaces. By capitalizing on these technologies, companies can stay at the forefront of innovation and ahead of their competition.
My next prediction is that we will begin to see professional services firms pivot from solely human labor-based projects to an AI-enhanced, human-hybrid model for professional accounting and audit services.
This change will start with disruption within the Big 4 and the models they use to gain insights and recommend actions. As clients become more tech-savvy, their expectations for efficient, technologically advanced services are increasing. A solely human-based approach will no longer provide the greatest insights or optimal strategies, so professional services firms will take decades of human knowledge and find a way to transfer it into a proprietary large language model that improves outcomes. Offering AI-based solutions will meet these expectations and improve client satisfaction and retention.
The way of the future will not be an enormous team on every project. Organizations will hire one of the Big 4 for an accounting audit and essentially get a four-person team whose work is augmented with AI to do the job over a shortened period of time. Forget hundreds of team members working over years to solve corporate challenges at exorbitant costs. AI-based systems will process and analyze data much faster than humans, leading to increased efficiency and the ability to scale services to meet client demand without a proportional increase in more expensive human resources.
With this shift to AI models, companies will achieve success more quickly as these accounting firms deliver an AI-based service built on their proprietary knowledge base at a fraction of the cost. AI systems enhance the accuracy of financial data analysis and reporting, and they also assist in risk management by identifying patterns and anomalies that might indicate errors or fraudulent activities.
A hybrid model where AI handles routine and repetitive tasks, and human professionals focus on complex, strategic decision-making and client relationships ensures that while leveraging the benefits of AI, the firms maintain the high standards of accuracy, ethics, and personal touch that their clients expect.
I believe that RPA, which had been instrumental in helping the automation journey for business process management, is poised to be replaced by more sophisticated and flexible artificial intelligence and LLM-based solutions. AI and LLMs offer advanced capabilities beyond the rule-based processes of RPA. They understand and generate natural language, make predictions, and provide insights, making them suitable for more complex and nuanced tasks in accounting.
While RPA was great for repetitive, rule-based tasks, it was brittle and lacked the flexibility to handle unstructured data or adapt to new situations without reprogramming. AI and LLMs analyze unstructured data, learn from new information, and adapt to changes more fluidly. The integration capabilities of AI and LLMs surpass those of RPA and scale to handle larger and more complex datasets. This provides a more holistic approach to automation, combining data from various sources for better insights and decision-making.
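The brittleness gap can be illustrated with a toy example. An RPA-style bot scripted against one exact invoice layout fails the moment the format shifts, while a more tolerant extractor (standing in here for an AI/LLM service, and deliberately simplified to a loose pattern) still recovers the value. The invoice strings and field labels are assumptions for illustration.

```python
import re

def rpa_extract_total(line: str):
    """RPA-style rule: assumes one fixed label with fixed spacing."""
    prefix = "TOTAL DUE: $"
    if line.startswith(prefix):
        return line[len(prefix):]
    return None  # any layout change breaks the scripted rule

def flexible_extract_total(line: str):
    """Looser extraction: tolerates wording, casing, and punctuation
    changes. A real AI/LLM service would generalize far beyond this."""
    match = re.search(
        r"(?:total|amount)\s*(?:due)?\s*[:\-]?\s*\$?([\d,]+\.\d{2})",
        line, re.I)
    return match.group(1) if match else None

old_layout = "TOTAL DUE: $1,250.00"
new_layout = "Amount due - $1,250.00"  # a vendor tweaks its template

rpa_result = rpa_extract_total(new_layout)        # the scripted rule breaks
ai_result = flexible_extract_total(new_layout)    # still recovers the total
```

Scale that one broken rule across hundreds of vendors and document formats, and the reprogramming cost of RPA versus the adaptability of AI/LLM services becomes the whole argument.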
AI/LLM services add strategic value to accounting departments by not just automating tasks but also providing predictive analytics, financial insights, and decision support, thereby elevating the role of the finance function within organizations. Over time, as AI technologies become more refined and accessible, the cost of implementing and maintaining AI/LLM solutions might become comparable to or even less than that of RPA, especially when considering the broader scope of tasks and the value of insights generated.
It's worth noting that this transition doesn't necessarily mean a complete replacement of RPA. Instead, it may involve a hybrid approach where RPA continues to handle well-defined, rule-based tasks while AI/LLM services are employed for more complex, judgment-based processes. The decision to migrate will also depend on factors such as the specific needs of the department, the nature of the tasks involved, and the cost-benefit analysis of making such a transition.
For my last prediction, I think customer service centers are likely to start experimenting with more advanced AI technologies, potentially moving towards systems that use Artificial General Intelligence (AGI) as the technology matures. However, it's important to note that true AGI – a system with the ability to understand, learn, and apply knowledge across a wide range of tasks at a level comparable to human intelligence – is still a topic of research and not yet a reality.
It seems likely that customer service centers might experiment with highly advanced AI systems that have some AGI-like features to gain better contextual understanding, leading to more accurate and relevant responses.
Advanced AI systems could be capable of learning from interactions and feedback, continuously improving their performance and adaptability to different customer needs and scenarios, helping to improve customer service and ultimately, satisfaction. With its ability to quickly process large volumes of data, AI could offer more personalized customer service by analyzing customer data and previous interactions to anticipate needs and tailor responses.
As I referenced earlier, future AI systems in customer service could employ multi-modal models that handle not only text and voice but also images and videos, providing a more comprehensive and intuitive service experience.
The exploration of advanced AI in customer service centers is likely to continue, driven by the desire to improve efficiency, reduce costs, and enhance customer satisfaction. However, ethical considerations, privacy concerns, and the need for human oversight will remain paramount as these technologies evolve and become more integrated into customer service operations.
In the fast-evolving realm of enterprise finance and accounting, staying ahead means recognizing and adapting to change swiftly. Insights into technological advancements, process innovations, and strategic shifts help equip CFOs, accountants, and stakeholders to navigate and capitalize on the upcoming transformations.
As we venture into 2024, I believe many of these will come to pass, or at least begin to percolate, and be poised to reshape finance and accounting in enterprises worldwide. So get ready and prepare for a future where agility and foresight are key to financial excellence.
All the best in 2024,