What’s Next for AI

This blog captures insights from a collaborative conversation among the architects and senior engineers at High Plains Computing. In this discussion, we explore the near-term prospects of AI, delving into the territory that lies beyond LLMs and the existing generation of AI models. The diverse perspectives shared by our team members provide a comprehensive, forward-looking exploration of the future landscape of artificial intelligence.

Modular AI

Biological neural networks have greatly influenced deep learning. Convolutional Neural Networks (CNNs) are one example of natural and artificial neural networks operating similarly. Biological neural networks are organized into nuclei, each with a unique function: a nucleus receives inputs from one group of nuclei and transmits its output to others.

The figure below (Figure-1) shows the basal ganglia nuclei circuit; a malfunction in this circuit results in Parkinson’s disease.


This circuit enables the smooth execution of voluntary movements. In Parkinson’s disease, dopaminergic neurons in the substantia nigra degenerate; the subthalamic nucleus (STN) becomes overactive and drives the medial globus pallidus to over-inhibit the thalamus, and the resulting thalamic dysfunction is generally blamed for the disease’s motor symptoms.

Non-biological neural networks, or AI models, are not built as modules; we add conventional, dumb code to connect models where necessary, but these code snippets are not general-purpose trainable machines.

Figure 2 below shows the current-generation mechanism for guardrails. Replies from the LLM are modulated using reinforcement learning from human feedback (RLHF).


The human in the loop teaches the reward model how to adjust the weights of the LLM using reward/punishment-based learning.

This mechanism works, but it is not a general-purpose design and is certainly not how biological neural networks work. The problem with this approach is that the LLM stays the central piece of the architecture: instead of independently blocking its output, the reward model merely adjusts the LLM’s weights.
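The reward/punishment loop described above can be sketched as a toy REINFORCE-style update. Everything here is illustrative, not an actual RLHF implementation: the "LLM" is just a softmax policy over three canned replies, and the reward model's scores are assumed human-preference labels rather than a learned network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: the "LLM" is a softmax policy over three canned replies.
replies = ["helpful answer", "harmless refusal", "toxic reply"]
logits = np.zeros(3)  # the trainable "weights" of the toy LLM

# Stand-in reward model: scores assumed to come from human preference labels.
reward = np.array([1.0, 0.5, -1.0])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# REINFORCE-style loop: sample a reply, then nudge the policy weights
# in the direction reward * grad(log prob) -- punishment pushes a reply down.
lr = 0.5
for _ in range(200):
    p = softmax(logits)
    a = rng.choice(3, p=p)
    grad_logp = -p
    grad_logp[a] += 1.0
    logits += lr * reward[a] * grad_logp

p = softmax(logits)
# After training, the negatively rewarded "toxic reply" has near-zero probability,
# but note the LLM weights themselves were changed -- nothing blocks the output.
```

The last comment is the point of the figure: in this scheme the reward signal only reshapes the LLM's own weights; there is no separate module that can veto an output at inference time.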

A better and more modular design for such guardrails is shown below (Figure-3).


The reward/punishment model has been generalized into a safety network. Just like the LLM, it projects its outputs to the final dense network, and it learns to inhibit specific LLM outputs and to reject some prompts altogether.

This design accommodates additional guardrail networks, contributing to a final global feed-forward network. One of the unique features of this design is that each module can be independently trained to specialize in its domain. The last linear network combines the other networks’ outputs to generate the final output.
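A minimal sketch of this modular design, under loud assumptions: the dimensions are arbitrary, and random weights stand in for independently trained modules. The LLM and the safety network both project into a final dense layer, and a scalar safety gate derived from the safety network's projection can inhibit the combined output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions for the sketch.
d_llm, d_safety, d_out = 8, 4, 5

# Each module is trained independently; random vectors stand in for their outputs.
llm_out = rng.normal(size=d_llm)        # LLM projection
safety_out = rng.normal(size=d_safety)  # safety-network projection

# The final dense network receives projections from BOTH modules,
# so the safety network can influence specific outputs directly.
W = rng.normal(size=(d_out, d_llm + d_safety))
b = np.zeros(d_out)

fused = np.concatenate([llm_out, safety_out])
final = W @ fused + b

# Illustrative inhibition: a sigmoid over a safety score scales the whole
# output toward zero when the safety network objects (gate is in (0, 1)).
safety_gate = 1.0 / (1.0 + np.exp(-safety_out.sum()))
gated_final = safety_gate * final
```

Additional guardrail networks would simply widen `fused` with their own projections; the final linear layer learns how to combine and weigh them all.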

Stateful AI

Currently, most deployed AI models do not remember prior interactions. Their biological counterparts, however, are almost always stateful.

One typical example is the worry and assurance networks implicated in anxiety and obsessive-compulsive disorders.

A typical scenario: someone leaves the house, and a common intrusive thought is, “Did I forget to lock the front door?” Neural networks in the prefrontal cortex immediately query the hippocampus (the long-term memory storage area) to see whether this particular scenario has happened before and whether the outcome was terrible. If nothing terrible returns, the assurance networks dampen the worry network’s projections to the limbic system, and the person simply ignores the intrusive thought.

The figure below shows the architecture of a potential stateful neural network.


In this architecture, the prompt input and the LLM output are temporally encoded into an LLM usage experience that can be stored in a vector database.

On each LLM use, the closest experiences are retrieved and fed to the output network. This way, the output network can add a yawning expression to repeated, boring prompts. Such LLM models would be like customer service agents who get tired and bored, just as human agents do.
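A rough sketch of the experience store, with stand-ins clearly labeled: a deterministic bag-of-words hash replaces a real temporal encoder, and a plain Python list replaces the vector database. Repeated, near-identical prompts accumulate a "boredom" count that a downstream output network could act on.

```python
import zlib
import numpy as np

DIM = 32  # toy embedding size

def embed(text: str) -> np.ndarray:
    """Hash words into a normalized bag-of-words vector (encoder stand-in)."""
    v = np.zeros(DIM)
    for w in text.lower().split():
        v[zlib.crc32(w.encode()) % DIM] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

experiences = []  # (embedding, prompt) pairs; stands in for a vector database

def record(prompt: str) -> None:
    """Store one LLM usage experience."""
    experiences.append((embed(prompt), prompt))

def boredom(prompt: str, threshold: float = 0.95) -> int:
    """Count stored experiences whose cosine similarity says 'seen this before'."""
    q = embed(prompt)
    return sum(1 for e, _ in experiences if float(e @ q) >= threshold)

for _ in range(3):
    record("what are your opening hours")
record("tell me a joke")
# The repeated prompt has accumulated a boredom count of 3; the fresh one only 1.
```

A real system would swap in a sentence encoder with timestamps and an actual vector store, but the retrieval-then-modulate loop is the same shape.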

Integrated Models

Integrated models are already emerging as multimodal AI networks, e.g., OpenAI CLIP, which integrates text with images. The current generation of multimodal AI networks concatenates embedding vectors instead of fusing all sensor inputs. Soon, all such models will evolve to fuse and integrate inputs from any other model.

Like today’s multimodal AI networks, integrated models will also utilize CNN models for audio/video, GPS location, and other motion sensors. They will be truly integrated, so each module’s projections can boost or inhibit the final output.
The figure below (Figure-5) shows an integrated question/answer bot with true fusion of additional sensor inputs.


Integrated networks let AI models modulate their output based on circumstances and environment. For example, a question/answer bot can read facial expressions to better understand motives; speech intonation and emphasis can help it better understand questions, and location sensors may help personalize or regulate output. Hopefully, this will equip AI models with enough intelligence to play dumb in front of legislators trying to ban them.
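One way such fusion could look, sketched with random stand-in embeddings (all names and dimensions here are illustrative assumptions, not a real model): instead of plain concatenation, the auxiliary modalities (facial expression, location) learn scalar gates that boost or inhibit the text pathway before the final output layer.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical embeddings from three modality-specific encoders
# (random stand-ins for trained CNN/transformer outputs).
text_emb = rng.normal(size=16)  # question text
face_emb = rng.normal(size=8)   # facial-expression features
loc_emb = rng.normal(size=4)    # location/sensor features

# Current-generation fusion: plain concatenation of embedding vectors.
concat = np.concatenate([text_emb, face_emb, loc_emb])

# "True fusion" sketch: each auxiliary modality contributes to a scalar gate.
# exp(...) > 1 boosts the text pathway; exp(...) < 1 inhibits it.
w_face = rng.normal(size=8) * 0.1
w_loc = rng.normal(size=4) * 0.1
gate = np.exp(np.tanh(face_emb @ w_face) + np.tanh(loc_emb @ w_loc))

W_out = rng.normal(size=(10, 16))
answer_logits = W_out @ (gate * text_emb)
```

The contrast is the point: `concat` just grows the input width, while the gated path lets a frown or an unusual location actively suppress (or amplify) what the text pathway wants to say.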

Integrated networks differ from modular ones in that all of their components are trained together to generate an integrated output.

About Author

Ajmal Mahmood is the Chief Engineer at High Plains Computing (HPC).
