
Process Tempo Insights

Phil Meredith

Recommendation Engines: A Cheaper Alternative to LLMs

Creating a Large Language Model (LLM) to help predict customer purchasing behavior sounds amazing. Once flush with accurate historical context, the LLM could assist sales teams in finding new revenue. This is one of several scenarios that might convince an organization to invest in the development of an LLM.


LLMs are big right now: the latest shiny new object taking the analytics world by storm.


However, the notion of using historical information to help with decision-making has all been done before, and at a much lower cost. The only difference is the introduction of unstructured data into the equation. Does this unstructured data move the needle in a big way? Or is it the structured data that provides the most lift? This is the (multi) million-dollar question every organization will need to consider when looking to implement LLM technology.


Recommendation Engines have been around for some time and have generated considerable return on investment across a number of industries and use cases. This ROI has been achieved by helping decision makers make smarter and more timely decisions. Their ease of implementation and the minimal cloud resources they consume should be weighed carefully by any organization serious about going down the LLM path.


The amount of compute power required by an LLM can be staggering!


(Image courtesy of nvidia.com)



How do Recommendation Engines work?


Recommendation Engines use the exact same historical context that you would provide to a Large Language Model: historical sales records or customer interactions, for example.


This data is loaded into a graph that highlights the relationships between customers, products, orders, salespeople, and so on. The engine then bases its recommendations on the patterns found within these relationships.
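
To make that concrete, here is a minimal sketch of the kind of pattern such a graph encodes, written in Cypher (the query language used later in this post). The labels, relationship types, and sample values are illustrative assumptions, not a prescribed schema:

// One purchase, expressed as graph relationships (names are illustrative)
CREATE (cust:Customer {companyName: "Acme Corp"})
CREATE (so:Order {orderId: 1001})
CREATE (prod:Product {productName: "Widget A"})
CREATE (so)-[:PLACED_BY]->(cust)
CREATE (so)-[:CONTAINS]->(prod)

Every customer, order, and product becomes a node, and every interaction becomes a relationship the engine can traverse.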


Sample questions a Recommendation Engine can answer include:

  • Which customers are most likely to purchase this product?

  • Which customer service representative or salesperson should be paired with this customer?

  • Which parts are the best substitutes for this one?

  • What products should I cross-sell with this one?

  • Which teachers is this student most likely to succeed with?


These questions would be very similar to those that might be asked of a Large Language Model.

Another characteristic of Recommendation Engines worth calling out is explainability. Is it easy to understand (and explain) how the computer came up with a given result? With LLMs, the logic is hidden inside the model, and users have to place full trust in the results. The logic used by a Recommendation Engine, on the other hand, is rooted in the data model itself and is therefore much easier to explain and test.
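
To make this concrete: the evidence behind a graph-based recommendation is simply a set of paths that can be returned and inspected. Here is a minimal sketch in Cypher, assuming the customer/order/product model used later in this post (the labels, properties, and parameter names are illustrative):

// Show the shared purchases that justify recommending a product to a customer
MATCH path = (c:Customer {companyName: $customer})<--(:Order)-->(:Product)<--(:Order)-->(rec:Product {productName: $product})
RETURN path
LIMIT 25

Each returned path literally shows which shared purchases drove the recommendation.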



Comparison between Recommendation Engines and LLM-based approaches

                              Large Language Model    Recommendation Engine
Leverages unstructured data   Yes                     No
Requires model training       Yes                     No
Compute power required        Extensive               Minimal
Explainability                Difficult               Transparent
Time to Benefit               Months                  Weeks
Total Cost                    Very High               Small in comparison



Building a Recommendation Engine in Five Easy Steps

It is important to contrast the implementation effort between the two approaches. LLMs require a considerable amount of time and resources to develop, while Recommendation Engines can be stood up quite quickly.


Let's walk through the steps of creating a Recommendation Engine in Process Tempo:


Step One: Using the Process Tempo API connector, create a connection to your CRM system (such as Salesforce.com). Leverage an API that will return order details (the who, what, when, and where of an order). This data will serve as our historical context and is likely the exact same data you would hand off to an LLM.
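
For readers curious what such a pull looks like under the hood, here is a minimal sketch of an equivalent fetch in plain Cypher, assuming the APOC library is available and a hypothetical JSON endpoint that returns an array of order records (the URL and field names are illustrative):

// Fetch order details from a hypothetical CRM endpoint
CALL apoc.load.json("https://example.com/api/orders") YIELD value
UNWIND value.orders AS row
RETURN row.orderId, row.customerId, row.sku, row.orderDate
LIMIT 10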


Step Two: Model this data into a graph using our Import Modeler. Here is an example of what a model could look like. Note the relationships that can be traversed across products, orders, and customers. It is these relationships that will bring the Recommendation Engine to life. We have also included support ticket information and associated metrics in this model, which tell us a bit more about a customer's use of, or success with, a given product.
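
As a rough sketch of what the import step produces behind the scenes, the following Cypher builds that shape of graph from the order records pulled in Step One. The labels, relationship types, and properties are illustrative assumptions, not Process Tempo's actual output:

// Build the graph from order rows (labels and relationship types are illustrative)
UNWIND $rows AS row
MERGE (cust:Customer {id: row.customerId})
SET cust.companyName = row.companyName
MERGE (prod:Product {sku: row.sku})
SET prod.productName = row.productName
MERGE (so:Order {orderId: row.orderId})
MERGE (so)-[:PLACED_BY]->(cust)
MERGE (so)-[:CONTAINS]->(prod)

// Support tickets would be merged the same way from their own feed, e.g.
// (t:SupportTicket)-[:OPENED_BY]->(cust) and (t)-[:ABOUT]->(prod)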



Step Three: Build the query. In this example we are attempting to look for purchase similarities across customers based on past purchases and products purchased together. This is a very simple, pattern-based query that could be further augmented with graph data science algorithms. It is literally only a few lines of query code:

MATCH (CUST:Customer)<--(SO:Order)-->(PROD:Product)<--(SO2:Order)-->(PROD2:Product)
// Exclude products the customer has already purchased in any order
WHERE NOT (CUST)<--(:Order)-->(PROD2)
WITH CUST.companyName AS Customer, PROD2.productName AS Product, count(DISTINCT SO2) AS Strength
RETURN Customer, Product, Strength
ORDER BY Strength DESC

Step Four: Now with Process Tempo we can allow users to ask basic questions of the data. For example, in this dashboard we leverage the above query and allow the user to filter the data on the customer's name or the name of the product. With a few clicks, a salesperson can see product recommendations for a given customer. The reverse is also true: starting from a product, the salesperson can see which customers should have the most interest in it.
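
Behind such a filter, the dashboard simply parameterizes the Step Three query. A sketch of what the customer-filtered version might look like (the parameter name is illustrative):

// Recommendations for one customer, strongest first
MATCH (CUST:Customer)<--(SO:Order)-->(PROD:Product)<--(SO2:Order)-->(PROD2:Product)
WHERE CUST.companyName = $customerName
  AND NOT (CUST)<--(:Order)-->(PROD2)
RETURN PROD2.productName AS Product, count(DISTINCT SO2) AS Strength
ORDER BY Strength DESC
LIMIT 10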



This dashboard also allows us to perform testing to ensure the recommendations provided make sense and offer the results we would expect.


Step Five: Lastly, if we wish to expose the logic of this recommendation to external systems, we can allow developers to access the Process Tempo API. External systems will then be able to send requests to this API and get results back in near real-time. For example, a sales representative may have the customer on the phone and, from the same CRM screen, be able to provide a real-time, contextual recommendation specific to that customer.
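
One practical note for near real-time responses: the properties used as filters should be indexed. Assuming a Neo4j-style graph backend, that preparation might look like this (the index names are illustrative):

// Index the lookup properties so API calls return quickly
CREATE INDEX customer_name IF NOT EXISTS FOR (c:Customer) ON (c.companyName);
CREATE INDEX product_name IF NOT EXISTS FOR (p:Product) ON (p.productName);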



Summary

As you can see, the effort to stand up a Recommendation Engine can be fairly straightforward. It does not require a tremendous amount of compute power, and it does not require model training. The results it produces are also easier to understand and explain.


Before organizations rush to implement Large Language Models (LLMs), they should first consider the time and cost required to implement them. If the focus is to help grow revenue or improve customer reach, there is a good chance that simpler, more cost-effective approaches exist. This is likely true of Recommendation Engines, which require less effort to stand up and yet have the potential to produce similar results.





