How LLMs like ChatGPT impact Causaly’s approach to biomedical knowledge discovery and acquisition
by Artur Saudabayev
At Causaly, we stay close to developments in foundational natural language processing technologies and broader AI to understand where we stand relative to both the academic and the industrial state of the art. We are always seeking ways to reinvent ourselves, improve our existing technology, and build new capabilities better and faster.
Recent developments in generative Large Language Models (LLMs), particularly ChatGPT and the integration of LLMs into search, are a breakthrough that will transform what’s possible in a number of domains. Many people are wondering how this technology can become part of their professional lives: can it automate certain tasks or do them better than humans? Can it deliver better performance than existing tools? Can it augment and empower people to do their jobs better? It’s not easy to navigate the current media environment for systematic answers to these questions, so we compiled a quick guide to help Causaly users differentiate between Causaly and LLM-enabled conversational and search products.
LLMs and the technologies built on them, like ChatGPT and Google Bard, are at their core next-token prediction systems that aim to provide the most plausible “answer” to a user query. They are not designed for comprehensive information extraction or retrieval tasks. This technology carries a number of limitations, including bias, a limited ability to incorporate events beyond the training data, hallucinations, and shortcomings in factuality and transparency. Unlike Causaly, which is designed to mitigate these limitations, LLMs are not fit for purpose for unbiased biomedical knowledge discovery and acquisition.
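To make “next-token prediction” concrete, here is a deliberately simplified sketch: a toy bigram model in Python (an illustration only, not how ChatGPT, Bard, or Causaly is implemented). The model merely counts which token most often follows another, so it emits the statistically most plausible continuation with no notion of whether that continuation is factually correct.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each token, which token follows it and how often."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for current, nxt in zip(tokens, tokens[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(model, token):
    """Return the most frequent continuation -- plausible, not verified."""
    followers = model.get(token)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Toy corpus; real models are trained on billions of tokens.
model = train_bigram_model(
    "aspirin inhibits cox1 aspirin inhibits cox2 aspirin reduces inflammation"
)
print(predict_next(model, "aspirin"))  # -> "inhibits" (the most frequent follower)
```

The model’s only objective is likelihood, which is why a plausible-sounding completion is not the same thing as a verified fact.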
They are, on the other hand, very powerful language modelling systems that are setting new performance benchmarks on a number of natural language processing tasks. They will certainly commoditize certain kinds of problem solving and transform what’s possible for foundational natural language processing technology. The power of LLMs in predicting text and in question answering is remarkable; these models already demonstrate analogical reasoning, instruction following, and a grounding (“understanding”) of certain human concepts such as a table, a list or a summary. Further research directions in the area (e.g. Retrieval Augmented Generation) aim to address a number of LLM limitations and expand their range of use cases, and we are closely following these developments as part of our R&D effort.
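For readers unfamiliar with Retrieval Augmented Generation, the following minimal sketch shows the basic idea: retrieve relevant passages first, then constrain the model’s answer to that retrieved evidence. This is illustrative Python with a toy keyword-overlap retriever; production systems use learned embeddings and an actual LLM call, which is omitted here.

```python
def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query (a stand-in for a real retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Ground the answer in retrieved evidence rather than the model's parametric memory."""
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

docs = [
    "Metformin is a first-line treatment for type 2 diabetes.",
    "Aspirin irreversibly inhibits cyclooxygenase enzymes.",
]
prompt = build_prompt("What does aspirin inhibit?", docs)
# The prompt now contains the aspirin passage; an LLM call on it would be
# grounded in that evidence instead of relying on memorized training data.
```

Grounding the generation step in retrieved documents is one of the main levers for reducing hallucinations and making answers traceable to sources.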
Learn more about Causaly and request a demo!