BETA
This is a BETA experience. You may opt-out by clicking here

More From Forbes

Edit Story

Understanding The Ingredients In An AI Recipe

Following

Software is a mixture. We can liken enterprise software application development to the process of making soup i.e. there is plenty of scope for experimentation and the introduction of new ingredients or techniques, but there are also recipes for how to do it right. Indeed, a well-known brand of technical learning publications is known as the ‘cookbook’ series, it’s a parallel that works.

As software programmers now work to prepare, clean, pare-down and combine the ingredients in the models we use to build the new era of generative Artificial Intelligence (AI) and its Machine Learning (ML) power, it is worth thinking about the process in this way so that we understand the ingredients in the mixtures being created.

Start simple & small

After the (arguably justifiable) hype cycle that drove the popularization of Large Language Models (LLMs) in line with generative AI, the conversation playing out across the software industry wires turned to ‘large is good, but small is often more beautiful’ i.e. in the sense that smaller models could be used for more specific tasks… and actually, starting small and simple is quite sensible in any major pursuit.

Director of product management at Hycu Inc. Andy Fernandez says he can’t emphasize enough how important it is on the developer’s Large Language Model (LLM) journey to start small and simple. He thinks software engineers need to identify specific use cases that are not mission-critical where the team can build AI/ML muscle before fully integrating AI into the organization’s IT ‘products’ in live working operations. It is this process of identifying small but critical use cases to use as a testing ground before implementation that makes all the difference. Examples could include work carried out to streamline documentation or to accelerate scoping exercises to analyze future work.

“This step-by-step progression will provide learning and rapid feedback loops, on which to build the maturity required to maximize the use of LLMs. This approach to integrating AI/ML in software development ensures a solid foundation is built, risks are minimised and expertize is developed – all elements contributing to success,” advised Fernandez. “At the start, it’s also critical that you assign a stakeholder who is responsible for diving deeper and understanding how this works, how to interact with the model and how to spot anomalies. This provides clear ownership and rapid actions."

Hycu (stylized as HYCU in the company’s branding and pronounced ‘haiku’ as in Japanese poetry) is a Data Protection & Backup-as-a-Service company known for managing enterprise software systems with ‘hundreds’ of data silos requiring ‘multiple’ backups. Hycu Protégé is a Data Protection-as-a-Service (DPaaS) that makes it possible for companies to have purpose-built solutions for all their workloads that can be managed via a single view. Logically then, the type of software platform that can make good use of AI/ML if it is intelligently applied.

Choosing the right LLM

If we are saying that the LLM is the ingredient (actually it should be ingredients, plural) behind the soup that finally becomes our AI, then we need to treat it with care. For smaller tasks, a simple ‘wrapper’ (an intermediary software layer designed to direct and channel the knowledge and intelligence that a foundational language model can provide) around an existing LLM might suffice.

“However, not all tasks require a foundational LLM,” explained Fernandez. “Specialized models often better suit niche needs. However, when integrating LLMs into the development menu, it's vital to choose carefully, as the chosen platform often becomes a long-term commitment. OpenAI's GPT series offers flexibility that can meet a variety of tasks without specific training and has a broad knowledge base given the vast repository of knowledge it has access to. AI21 Labs' Jurassic Models are known for scalability and strong performance especially when it comes to language understanding and generation tasks.”

After selecting the initial AI/ML formula to test, understanding exactly how the LLM works and how to interact with its Application Programming Interface (API) is of foremost importance. Organizations need to realize that ay least one person (the head AI chef, if you will) needs to understand the model's strengths and weaknesses in detail and fluently.

“For basic tasks like enhancing documentation, senior team members should closely evaluate the outcomes, ensuring they align with objectives,” said Hycu’s Fernandez. “Deeper understanding is necessary for advanced tasks like integrating AI into products, where issues like data hygiene and privacy are paramount. Furthermore, using cloud infrastructure and services can unlock different AI/ML use cases. But it’s still essential to understand how the cloud and AI/ML can best work in tandem.”

AI guardrails

Ensuring the quality of data used in LLMs is also critical. Everyone on the team must constantly test and question the outputs to make sure mistakes, hallucinations and inadequate outputs are spotted and resolved early. This is where the importance of specialists cannot be ignored. The outputs of AI are not infallible and developers must act accordingly.

Fernandez here notes that there are several ‘guardrails’ to consider in this regard. For instance, from a data sanitization perspective, enterprises must be strict and critical when selecting a provider. This requires evaluating how providers communicate their data processing methods, including data cleaning, sanitization and de-duplication.

“Data segmentation is vital to keep the open data that is accessible to the LLM and the mission-critical or sensitive data physically and logically separate,” insisted Fernandez. “The organization must also conduct periodic audits to ensure that the data processing and handling comply with relevant data protection laws and industry standards. Using tools and practices for identifying and redacting personally identifiable information (PII) before it is processed by the LLM is vital.”

Furthermore, an organization must establish processes for reviewing the LLM's outputs (tasting the broth as it is cooked, right?), especially in applications where sensitive data might be involved. As such, implementing feedback loops where anomalies or potential data breaches are quickly identified and addressed is critical. It's also essential to stay informed about legal and ethical considerations, ensuring a responsible and safe use of technology.

The source of open source

“We need to remember that closed source (i.e. as opposed to open source) LLMs, recommended for companies with proprietary information or custom solutions, suit the need for strict data governance and dedicated support. Meanwhile, open source LLMs are ideal for collaborative projects without proprietary constraints. This choice significantly impacts the efficiency and safety of the development process," said Hycu's Fernandez. “Developers can also consider using prompt injections. This is using a prompt that alters the model and can even unlock responses usually not available. Most of these injections are benign and involve people experimenting and testing the limits of the model. However, some can do so for unethical purposes.”

Looking into the AI kitchen of the immediate future, we may likely find an increasing amount of language models and associated tooling given the automation treatment. It is only logical to automate and package up (like a ready meal) easily repeatable processes and functions, but it will still be a case of reading the ingredients lists, even if we can put some elements of our mixture through at microwave speed.

This notion is allied to Fernandez’s closing thoughts on the subject, as he expects LLMs to become more specialized and integrated into various industries. “This evolution mirrors the ongoing integration of AI into various enterprise applications. We will also see the introduction of AI into the enterprise fabric. For instance, Microsoft Copilot and AI integrations into GitHub,” he said.

Software will always be a mixture of ingredients, prepared to a specific recipe with many opportunities for experimentation, fusion and combination - and AI is a perfect breeding ground for more of those processes to happen. Just remember the guardrails so we know when to turn the oven off, think about who is going to really fluently understand what’s happening in their role as head chef... and assign the right responsibilities to the appropriate people to avoid too many cooks.

Follow me on Twitter or LinkedIn