Pre-Processing Puts Businesses Back In Control of Their Data

Jan 23, 2024
With Conveyer’s data transformation capabilities, you never have to blindly trust a black-box AI.

By Conveyer Chief Product Officer Maxwell K. Riggsbee

Here’s the bad news: AI models hallucinate far more often than most people realize. According to a recent New York Times report, ChatGPT fabricates at least 3% of the information it produces, while Google’s PaLM chat technology makes things up an eye-watering 27% of the time.

That doesn’t make generative AI tools useless, of course. In fact, such technologies could soon drive $4.4 trillion a year in economic benefits. But unless we can find a way to minimize inaccuracies, we’ll never be able to use GenAI tools for truly high-stakes applications across our organizations.

At Conveyer, we think the answer lies less in trying to eliminate hallucinations than in rethinking the way we build business systems around AI technologies — creating data workflows that rapidly and reliably intercept problematic AI outputs before they cause harm in the real world.

The problem with black boxes

The trouble with ubiquitous AI is that the closer you move an AI model to the point of delivery, the harder it becomes to police its actions. If you allow a customer or end-user to query a large language model directly, for instance, you’re subject to that LLM’s caprices and have little or no opportunity to correct its mistakes.

That’s especially problematic because LLMs are essentially black boxes: there’s no way to look under the hood and know in advance that a given model won’t inadvertently produce inaccurate or even legally problematic content.

To put things right, and enable the rollout of high-trust AI models at scale, we need to move LLMs further away from the end-user, giving ourselves the chance to minimize missteps, monitor for errors, and vet model outputs before they reach the end-user. We also need effective processes for detecting potential AI errors at scale, without blunting AI’s power to drive game-changing operational benefits for organizations and their customers.

The power of pre-processing

Conveyer’s approach does just that by turning AI data processing into a multi-step process that’s conducted before data products or AI model outputs are presented to customers or end-users:

  1. First, we add structure to datasets, creating discrete Topics using an algorithmic process that acts only on data contained in source documents, with no potential for hallucinations or the addition of incorrect data of any kind.
  2. Next, we run each Topic through a series of separate AI analytics modules, creating distinct data artifacts reflecting sentiment analysis, summaries, classifications and keywords, and Q&A pairs.
  3. Each data artifact is then automatically cross-referenced to check for any introduced errors. Statistical analysis surfaces any potential errors, and data artifacts can also be referred for manual verification.
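As an illustration only — Conveyer’s actual implementation is proprietary, and every function and field name below is hypothetical — the three steps above might be sketched like this, with trivial stand-ins for the real analytics modules:

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    kind: str              # e.g. "summary", "sentiment", "keywords", "qa_pairs"
    value: object
    flagged: bool = False  # set when cross-referencing finds a discrepancy

@dataclass
class Topic:
    text: str
    artifacts: list = field(default_factory=list)

def structure_into_topics(document: str) -> list:
    """Step 1: deterministic structuring -- split source text into
    discrete Topics using only content already in the document."""
    return [Topic(p.strip()) for p in document.split("\n\n") if p.strip()]

def run_analytics(topic: Topic) -> None:
    """Step 2: run each Topic through separate analysis modules
    (trivial placeholders here for the real AI analytics)."""
    words = topic.text.split()
    topic.artifacts.append(Artifact("summary", " ".join(words[:8])))
    topic.artifacts.append(Artifact(
        "keywords",
        sorted({w.lower().strip(".,") for w in words if len(w) > 6}),
    ))

def cross_reference(topic: Topic) -> list:
    """Step 3: check each artifact against its source Topic and flag
    anything that introduces content absent from the original."""
    flagged = []
    source = topic.text.lower()
    for art in topic.artifacts:
        terms = art.value if isinstance(art.value, list) else str(art.value).split()
        if any(str(t).lower() not in source for t in terms):
            art.flagged = True
            flagged.append(art)
    return flagged

doc = ("Invoices are due in thirty days.\n\n"
       "Refunds require a receipt and manager approval.")
topics = structure_into_topics(doc)
for t in topics:
    run_analytics(t)
    cross_reference(t)
```

Because step 1 is purely algorithmic, nothing new enters the pipeline there; step 3 then only has to verify each artifact against a single small Topic rather than an entire dataset.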

Where conventional GenAI systems ingest huge datasets, Conveyer’s approach effectively chunks datasets into small units, or Topics, that reflect single coherent ideas. This narrows the focus for GenAI applications, making accuracy easier to attain and errors easier to spot.
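The chunking idea itself is easy to illustrate. The sketch below is a generic approximation, not Conveyer’s algorithm: it treats paragraph breaks as idea boundaries and caps each unit at a fixed size, so any downstream check operates on one small, coherent unit at a time:

```python
def chunk_into_units(document: str, max_words: int = 60) -> list[str]:
    """Split a document into small, self-contained units.
    Paragraph breaks are treated as idea boundaries; long paragraphs
    are split further so no unit exceeds max_words."""
    units = []
    for para in document.split("\n\n"):
        words = para.split()
        if not words:
            continue
        for i in range(0, len(words), max_words):
            units.append(" ".join(words[i:i + max_words]))
    return units

# A short paragraph stays whole; a 130-word paragraph splits into
# three units of at most 60 words each.
doc = "First idea, briefly stated.\n\n" + " ".join(["word"] * 130)
units = chunk_into_units(doc)
```

Checking a generated summary against a 60-word unit is a far smaller verification problem than checking it against a million-word corpus — that is the whole point of narrowing the focus.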

To be clear, we aren’t radically changing the way that LLMs work, or reducing the inherent risk of hallucinations for any given AI model. But by carrying out AI operations and data validation as part of a pre-processing workflow, Conveyer can help organizations to ensure that any GenAI errors get surfaced before they become a problem for customers and end-users.

The result: a powerful and scalable data-processing paradigm that enables organizations to capture the benefits of GenAI and other machine learning technologies, while also minimizing their exposure to the inefficiencies and critical business risks that come with deploying inaccurate data and untrustworthy AI models.

For high-stakes data processes, or any workflow in which errors have serious consequences, robust pre-processing is a vital capability. Get in touch today, and find out how Conveyer’s pre-processing technology can make your company’s AI initiatives more dependable, more efficient, and less prone to costly errors.




Conveyer is an AI platform that unlocks and transforms unstructured data and simplifies curation for business to fuel actionable insights.