Just Follow the Process: The Human Error

Jonas Hultenius

2024-02-20

The modern world is one of overwhelming complexity, resulting in the need to create documentation, processes and guiding documents for every aspect of life. The end product is layers upon layers of information, and navigating through these vast, almost endless amounts of data and documentation can be a daunting task. Human error, a persistent challenge across industries, often stems from the inability to accurately interpret or manage large datasets and documentation.

However, with the advent of Large Language Models, or LLMs, and advanced AI technologies, there's newfound hope for mitigating human error and improving the accuracy of data interpretation.

LLMs, such as GPT-whatever-number-we-are-currently-on, have emerged as powerful tools for natural language understanding and generation. These models are trained on extensive datasets, enabling them to understand and generate human-like text across various domains. By making documentation searchable and accessible to LLMs, organizations can leverage these models to identify irregularities, mismatches, and inconsistencies in their documentation processes and governing documents.

One of the primary benefits of using LLMs is their ability to analyze large volumes of text data quickly and efficiently. Traditional manual review processes often rely on human reviewers to comb through documents, which can be time-consuming and prone to error. In contrast, LLMs can process vast amounts of text in a fraction of the time, flagging potential discrepancies or inconsistencies for further review by human experts.
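The flagging workflow can be sketched in a few lines. This is a minimal, runnable illustration only: `ask_llm` is a hypothetical stand-in for a real LLM API call, replaced here by a hand-written check so the flow can execute, and the document names and chunk size are invented for the example.

```python
# Sketch of a batch scan: chunk each document, ask the model to review
# each chunk, and collect the chunks flagged for human follow-up.

def chunk_text(text: str, size: int = 500) -> list[str]:
    """Split a document into fixed-size chunks that fit a model's context window."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def ask_llm(prompt: str) -> str:
    # Hypothetical stub: a real system would call an LLM here.
    # This stand-in flags excerpts containing two conflicting deadlines.
    return "FLAG" if "thirty days" in prompt and "two weeks" in prompt else "OK"

def scan_documents(docs: dict[str, str]) -> list[str]:
    """Send every chunk past the model and return the flagged chunk IDs."""
    flagged = []
    for name, text in docs.items():
        for i, chunk in enumerate(chunk_text(text)):
            prompt = f"Review this excerpt for inconsistencies:\n{chunk}"
            if ask_llm(prompt) == "FLAG":
                flagged.append(f"{name}#chunk{i}")
    return flagged

docs = {
    "returns-policy": "Refunds are issued within thirty days. Refunds expire after two weeks.",
    "printer-guide": "Order paper through the facilities portal.",
}
print(scan_documents(docs))  # ['returns-policy#chunk0']
```

The human expert stays in the loop: the scan only narrows thousands of pages down to the handful of excerpts worth a closer look.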

An organization creates a lot of governing documents, guides, checklists and other great content to facilitate everything from the correct installation of various parts of a nuclear reactor to the processes and steps that need to be followed when ordering more paper for the office printers. These documents are often numerous and readily available to the staff from a central location, like shared disk space or a document management and storage system like SharePoint.

Some documents are kept up to date and some are as old as time itself, or at least the early nineties. Each document, and there may be several thousand at this point, dictates a certain truth that was probably correct at the time of writing but, unbeknownst to most who work with and according to them, may be at complete odds with another guiding principle.

This is hard, or even impossible, for an individual, or for that matter a team, to grasp. We simply cannot process all that information and hold all these slight nuances in our heads, not to speak of all the rules and contradicting statements. This is where an LLM, coupled with some clever prompting and general AI, excels.
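To see why this outgrows human capacity, note that checking every pair of n documents means n(n-1)/2 comparisons, which a machine can grind through tirelessly. A hedged sketch, where `contradicts` is a stand-in for an LLM prompt along the lines of "Do these two policies contradict each other?", and the policy texts are invented:

```python
# Sketch: pairwise cross-checking of governing documents.
from itertools import combinations

def contradicts(a: str, b: str) -> bool:
    # Hypothetical stub: a real system would prompt an LLM with both texts.
    # This stand-in catches the mandatory-vs-optional conflict in the sample data.
    return ("mandatory" in a and "optional" in b) or ("mandatory" in b and "optional" in a)

def cross_check(docs: dict[str, str]) -> list[tuple[str, str]]:
    """Compare every pair of documents and return the conflicting pairs."""
    return [(x, y) for (x, a), (y, b) in combinations(docs.items(), 2)
            if contradicts(a, b)]

policies = {
    "security-2019": "Two-factor login is mandatory for all staff.",
    "onboarding-1994": "Two-factor login is optional.",
    "printers": "Order paper via the facilities portal.",
}
print(cross_check(policies))  # [('security-2019', 'onboarding-1994')]
```

Three documents yield three comparisons; three thousand documents yield roughly four and a half million, which is precisely the workload no team can hold in its head.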

Furthermore, LLMs can help uncover subtle language and sentiment nuances and patterns in textual data that may go unnoticed by human reviewers. These models excel at identifying linguistic patterns, contextual cues, and semantic relationships within documents, allowing them to detect irregularities or mismatches that may indicate errors or inconsistencies in the documentation process.

LLMs will also help us keep our documentation in line with our overall linguistic style. As an organization grows and changes with time, the documentation and language therein must also change. The information in a document may still be correct, but the style and ‘language’ must be adapted with the times.

Another significant advantage of leveraging LLMs is their ability to provide real-time feedback and suggestions for improvement. By integrating LLMs into documentation platforms or workflows, organizations can receive immediate insights into potential errors or inconsistencies as documents are being created or edited. This proactive approach to error detection and correction can significantly reduce the likelihood of errors slipping through the cracks and improve the overall quality of documentation.
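Such an integration can be as simple as a hook that runs whenever a draft is saved. In this minimal sketch, `suggest` is a hypothetical stand-in for an LLM call, replaced by two hand-written checks so the example can run; the document name and issue wording are illustrative.

```python
# Sketch: a save-hook that surfaces feedback while a document is being edited.

def suggest(text: str) -> list[str]:
    # Hypothetical stub: a real integration would prompt an LLM for issues.
    issues = []
    if "TBD" in text:
        issues.append("unresolved placeholder 'TBD'")
    if text and not text.rstrip().endswith("."):
        issues.append("section does not end with a full sentence")
    return issues

def on_save(doc_name: str, text: str) -> list[str]:
    """Called by the editing platform whenever a draft is saved."""
    return [f"{doc_name}: {issue}" for issue in suggest(text)]

print(on_save("reactor-checklist", "Step 3 torque value: TBD"))
```

The point is the timing: the author sees the feedback while the document is still open, not months later in an audit.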

Moreover, by making documentation searchable and accessible to LLMs, organizations can create a valuable knowledge repository that can be leveraged for various purposes beyond error detection. LLMs can assist with information retrieval, summarization, and knowledge extraction, enabling organizations to derive actionable insights and make data-driven decisions based on their documentation archives.
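The retrieval side can be illustrated with a deliberately simple ranking. A real system would use embeddings and have an LLM summarize the hits; this stdlib-only sketch ranks documents by word overlap with the query, and the archive contents are invented for the example.

```python
# Sketch: a minimal retrieval layer over the documentation archive.

def score(query: str, text: str) -> int:
    """Count how many query words appear in the document."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve(query: str, docs: dict[str, str], top: int = 1) -> list[str]:
    """Return the names of the best-matching documents for a query."""
    ranked = sorted(docs, key=lambda name: score(query, docs[name]), reverse=True)
    return ranked[:top]

archive = {
    "printer-orders": "how to order more paper for the office printers",
    "reactor-install": "installation steps for reactor cooling parts",
}
print(retrieve("ordering printer paper", archive))  # ['printer-orders']
```

Once documents can be found this way, the same index feeds summarization and question answering on top of the archive.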

Let the users chat or talk to the documents and, when they find errors, report them back to you. In the interaction between machine and man, several discrepancies will become clear as day. If the LLM suggests things that are illogical and clearly wrong, it is probably due to contradictions and human error in the source documents. So, by letting your users interact with your documents instead of trying to read the bulk of them themselves, you can swiftly straighten out these issues in a matter of weeks, and get rid of outdated instructions entirely.
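Closing that feedback loop just needs somewhere for the reports to land. A minimal sketch, where the `FeedbackLog` class, report format, and sample reports are all illustrative assumptions rather than any specific product's API:

```python
# Sketch: collecting user-reported discrepancies from chat sessions so the
# documentation team knows which documents to fix first.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class FeedbackLog:
    reports: list[tuple[str, str]] = field(default_factory=list)

    def report(self, doc: str, issue: str) -> None:
        """Record that a user hit a contradiction or outdated instruction."""
        self.reports.append((doc, issue))

    def hotspots(self) -> list[tuple[str, int]]:
        """Documents most often reported, i.e. the ones to fix first."""
        return Counter(doc for doc, _ in self.reports).most_common()

log = FeedbackLog()
log.report("onboarding-1994", "contradicts the 2019 security policy")
log.report("onboarding-1994", "mentions a fax number")
log.report("printers", "portal link is dead")
print(log.hotspots())  # [('onboarding-1994', 2), ('printers', 1)]
```

The most-reported documents are exactly the outdated instructions worth retiring first.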

However, while LLMs offer significant potential in mitigating human error in our documentation, it’s essential to recognize their limitations and potential biases. Like any AI model, LLMs are trained on datasets that may contain inherent biases or limitations, which can impact their performance and the accuracy of their predictions. Organizations must carefully consider these factors and implement appropriate safeguards to ensure the responsible and ethical use of LLMs.

The integration of LLMs with our documents and governing data represents a significant step forward in mitigating human error and improving the accuracy and reliability of textual data interpretation. By making documentation accessible to LLMs, organizations can leverage these advanced AI technologies to identify irregularities, mismatches, and inconsistencies in their documentation processes, ultimately enhancing the quality and reliability of their documentation outputs. And hopefully it will make us, the humans, less error-prone in general.