Large Language Models (LLMs) are advanced artificial intelligence (AI) models that use deep learning techniques to understand human language and generate human-like responses to text-based inputs. They are designed to process and analyse vast amounts of data in order to learn patterns, detect trends and generate predictions. LLMs have the potential to revolutionise many fields, including natural language processing, machine translation, text summarisation and more. Here at TITAN, a European research and innovation initiative, we believe LLMs will be a crucial tool for countering online disinformation.
Some well-known examples of LLMs include the general-purpose ChatGPT, which is probably the most widely known LLM available today. Its underlying GPT-3 model has 175 billion parameters, and it can perform a wide range of language processing tasks, including translation, text completion and question answering. Google's BERT is used for tasks such as sentiment analysis and text classification: it can understand the context of a sentence and generate accurate predictions based on the content.
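To make this concrete, here is a minimal sketch of the kind of sentiment analysis a BERT-family model performs, using the open-source Hugging Face transformers library. The specific model checkpoint and example headlines are our own illustrative choices, not something prescribed by TITAN:

```python
# A minimal sketch of BERT-style sentiment analysis with Hugging Face
# `transformers`. The checkpoint below is one publicly available
# fine-tuned model, chosen purely for illustration.
from transformers import pipeline

# Load a sentiment-analysis pipeline backed by a BERT-family model.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

headlines = [
    "Miracle cure discovered: doctors hate this one simple trick!",
    "The central bank held interest rates steady this quarter.",
]

# Each result is a dict with a predicted label and a confidence score.
for headline, result in zip(headlines, classifier(headlines)):
    print(f"{result['label']:>8}  ({result['score']:.2f})  {headline}")
```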
For other examples and more information on the different kinds of language models, we recommend the TechCrunch article 'The emerging types of language models and why they matter'.
LLMs and the fight against false information
In recent years, large language models have also been employed in the fight against disinformation. Disinformation, or the deliberate spread of false or misleading information, has become a significant problem in the digital age. Social media platforms and online news sources make it easy for people to access and share information, but they have also enabled false information to spread at an unprecedented rate. LLMs have the potential to help combat this problem by detecting and flagging disinformation signals in real time.
The TITAN project believes one of the primary ways LLMs can be used in this fight is through leveraging Natural Language Processing (NLP) techniques. NLP is a branch of artificial intelligence that focuses on the interaction between computers and human language. With NLP, LLMs can be trained to analyse text for signs of disinformation such as sensationalism, logical fallacies and inconsistencies.
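As a rough illustration of this idea, the sketch below uses zero-shot classification, a common NLP technique, to score a piece of text against candidate disinformation signals. The model, labels and example text are assumptions chosen for illustration; a real system would use signals validated by fact-checking experts:

```python
# A sketch of screening text for disinformation signals with zero-shot
# classification. Labels and example text are illustrative assumptions.
from transformers import pipeline

# facebook/bart-large-mnli is a publicly available NLI model commonly
# used for zero-shot classification; similar checkpoints would also work.
detector = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = (
    "SHOCKING: Scientists admit everything you know about vaccines is a lie, "
    "and the government is hiding the proof!"
)

labels = ["sensational or emotionally manipulative", "neutral factual reporting"]

result = detector(text, candidate_labels=labels)

# Scores are returned sorted from most to least likely label.
for label, score in zip(result["labels"], result["scores"]):
    print(f"{score:.2f}  {label}")
```

In practice, scores like these would only ever be one signal among many, feeding a human review process rather than delivering an automatic verdict.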
Another approach is enhanced fact-checking. Fact-checking involves verifying the accuracy of information by cross-referencing it with reputable sources. LLMs can be trained to identify false claims and flag them for fact-checking. This approach can help prevent the spread of false information and promote more accurate reporting.
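A simplified sketch of how the verification step could look: a natural language inference (NLI) model compares a claim against a passage from a reputable source and estimates whether the evidence supports or contradicts it. The model choice and example texts below are assumptions for illustration, and a production pipeline would also need to retrieve the evidence in the first place:

```python
# A minimal sketch of claim verification with natural language inference
# (NLI): given evidence from a reputable source, does the model judge a
# claim to be supported or contradicted? Model and texts are illustrative.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"  # a public NLI checkpoint; others would work
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

evidence = "The WHO states that vaccines undergo rigorous safety testing before approval."
claim = "Vaccines are approved without any safety testing."

# Encode the (evidence, claim) pair and score it with the NLI model.
inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()

# MNLI-trained models like this one order labels as below.
for label, p in zip(["contradiction", "neutral", "entailment"], probs):
    print(f"{label}: {p.item():.2f}")
```

A high contradiction score would flag the claim for a human fact-checker; the model alone should not issue the final ruling.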
LLMs can also be used to create and disseminate accurate information. By generating content that's based on accurate data and sources, LLMs can help combat disinformation by providing people with reliable information to counter false claims.
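One generic way to do this, sketched below, is to 'ground' the model's output in passages that have already been verified, prompting it to answer only from those sources. This is an illustrative pattern rather than TITAN's implementation, and call_llm is a hypothetical placeholder for whichever model API is used:

```python
# A hedged sketch of "grounded" generation: constraining an LLM prompt to
# pre-verified source passages so its answer stays tied to accurate material.
# `call_llm` is a hypothetical stand-in for a real model API.

def build_grounded_prompt(question: str, sources: list[str]) -> str:
    """Assemble a prompt instructing the model to answer only from the
    supplied, pre-verified sources, citing them by number."""
    numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return (
        "Answer the question using ONLY the numbered sources below. "
        "Cite sources by number, and say 'not enough information' if "
        "the sources do not cover the question.\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "Do vaccines undergo safety testing before approval?",
    ["WHO guidance: vaccines are tested in multi-phase clinical trials before approval."],
)
print(prompt)  # In practice: answer = call_llm(prompt)
```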
The critical thinking consideration
Whilst Large Language Models have the potential to be powerful tools in the fight against disinformation, they are not a panacea, and their effectiveness depends on how they are used and trained. In theory, an LLM could generate disinformation if it is trained on a biased or inaccurate dataset, as it could present similar content when prompted with a related query. Misuse or misinterpretation of LLM-generated content could therefore influence people's views. If LLM-generated content is presented as completely accurate and trustworthy, without proper context and scrutiny, it could lead people to accept information without questioning its veracity.
This Forbes article - The Next Generation Of Large Language Models - highlights some LLM 'hallucinations', such as recommending books that don't exist or providing plausible-sounding but incorrect explanations of concepts like Bayes' Theorem.
TITAN's aim is to use LLMs to support users' critical thinking, rather than have people rely on their outputs without fact-checking. Our co-created NLP conversational agent will coach people, based on their needs, in how to perform their own logical fact-checking of suspected disinformation by asking relevant and pertinent questions. The three-year project is in its first six months of operation and is currently undertaking a series of co-creation workshops across Europe to better understand people's needs when it comes to spotting and investigating false information; these findings will inform the design of the AI-enabled education solution.
Ultimately, despite the potential of LLMs to revolutionise the way we all combat false information, these tools are just one element in the arsenal against disinformation. Truly combating disinformation in the digital age will take an ecosystem effort from a variety of stakeholders, including governments, tech companies and individuals.
Subscribe for TITAN updates/news at titanthinking.eu