Researchers have developed a logic-aware model that outperforms counterparts 500 times larger on specific language-understanding tasks, without relying on human-generated annotations. The model achieves this performance while preserving privacy and robustness, addressing concerns about the inefficiency and privacy risks of large AI models.
Although Large Language Models (LLMs) have demonstrated promising abilities in generating language, art, and code, they come with high computational demands, and uploading data through application programming interfaces poses privacy risks. Smaller models have historically been less capable than their larger counterparts, particularly in multitasking and weakly supervised tasks.
The researchers used the concept of “textual entailment” to help these models understand a variety of language tasks. In textual entailment, if one sentence (the premise) is true, then it is likely that the other sentence (the hypothesis) is also true. For instance, if the premise states “all cats have tails,” then the hypothesis “a tabby cat has a tail” is entailed by the premise.
The team’s previous research revealed that this approach, known as an “entailment model,” exhibited less bias than other language models. To leverage this concept, the researchers developed prompts that allow the models to determine whether specific information is entailed by a given sentence or phrase across different tasks. This technique enhanced the model’s adaptability to diverse tasks without requiring additional training, a capability referred to as zero-shot adaptation.
In the domain of “natural language understanding,” numerous applications rely on discerning the relationship between two pieces of text. For instance, in sentiment classification, the statement “I think the movie is good” can be inferred or entailed from a movie review stating, “I like the story and the acting is great,” indicating a positive sentiment. Similarly, in news classification, the topic of a news article can be inferred from its content. For example, the statement “the news article is about sports” can be entailed if the article’s main content reports on an NBA game. The researchers realised that many existing natural language understanding tasks could be reformulated as entailment tasks involving logical inference in natural language.
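The reformulation above can be sketched in a few lines of code. This is a hypothetical illustration, not the researchers' code: the `classify` and `entailment_score` functions and the `LABEL_CUES` lexicon are inventions of this sketch, and a crude keyword heuristic stands in for a real entailment model so the example runs without any model weights.

```python
# Hypothetical sketch: zero-shot classification recast as textual entailment.
# A real system would score (premise, hypothesis) with a trained entailment
# model; here a keyword heuristic stands in so the example is self-contained.

LABEL_CUES = {
    "sports": {"nba", "game", "playoff", "coach", "score"},
    "weather": {"rain", "storm", "sunny", "forecast", "temperature"},
}

def entailment_score(premise: str, label: str) -> float:
    """Toy stand-in for an entailment model's score: the fraction of the
    label's cue words that appear in the premise text."""
    words = set(premise.lower().replace(".", "").split())
    cues = LABEL_CUES[label]
    return len(words & cues) / len(cues)

def classify(article: str, labels) -> str:
    """Turn each candidate label into an entailment hypothesis and keep
    the label whose hypothesis is best entailed by the article."""
    scores = {}
    for lab in labels:
        # The hypothesis a real entailment model would check against the
        # article (the premise); the toy scorer only needs the label.
        hypothesis = f"the news article is about {lab}"
        scores[lab] = entailment_score(article, lab)
    return max(scores, key=scores.get)

article = "The coach praised the team after a tense NBA playoff game."
print(classify(article, ["sports", "weather"]))  # -> sports
```

The key point the sketch captures is that one entailment scorer plus a hypothesis template per task replaces a separately trained classifier for each task, which is what makes zero-shot adaptation possible.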
“Our research focuses on enhancing the capability of computer programs to comprehend and process natural language, which mimics the way humans speak and write,” explains Hongyin Luo, lead author of a new study from MIT CSAIL.
The study introduces entailment models with 350 million parameters that outperform supervised language models with 137 to 175 billion parameters without human-generated labels. This breakthrough can potentially revolutionise AI and machine learning, providing a scalable, reliable, and cost-effective solution for language modelling. Demonstrating the comparable performance of smaller models in language understanding opens avenues for sustainable and privacy-preserving AI technologies.
The model’s performance was enhanced through self-training, learning without human supervision or annotated data. This approach significantly improved results in sentiment analysis, question-answering, and news classification tasks. It surpassed Google’s LaMDA, FLAN, GPT models, and other supervised algorithms in zero-shot capabilities.
Self-training can be undermined by incorrect or noisy self-generated labels. The research addresses this challenge with a novel algorithm called ‘SimPLE’ (Simple Pseudo-Label Editing): by reviewing and revising the initially generated pseudo-labels, the algorithm improves the overall quality of the self-generated labels. CSAIL Senior Research Scientist James Glass emphasises that this study introduces an efficient approach for training large language models (LLMs) by framing language understanding tasks as contextual entailment problems and employing a self-training mechanism with pseudo-labelling, which allows substantial amounts of unlabelled text data to be incorporated during training.
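The general loop of self-training with pseudo-label editing can be sketched as follows. This is a simplified illustration of the idea (vote over repeated noisy predictions, then discard low-confidence pseudo-labels), not the published SimPLE implementation: `noisy_predict`, `edit_pseudo_labels`, and all thresholds are assumptions of this sketch.

```python
import random
from collections import Counter

# Simplified sketch of self-training with pseudo-label editing.
# Not the published SimPLE algorithm: `noisy_predict` stands in for a
# stochastic model pass (e.g. inference with dropout), and the editing
# rule (majority vote plus a confidence filter) only illustrates the idea.

def noisy_predict(example: str, rng: random.Random) -> str:
    """Toy stand-in for one stochastic model prediction: a trivial
    keyword rule whose output is randomly flipped 20% of the time."""
    label = "positive" if "great" in example else "negative"
    if rng.random() < 0.2:
        return "negative" if label == "positive" else "positive"
    return label

def edit_pseudo_labels(examples, n_votes=9, min_agreement=0.7, seed=0):
    """Pseudo-label each unlabelled example by repeated noisy prediction,
    then keep only examples whose majority label is confident enough."""
    rng = random.Random(seed)
    kept = []
    for ex in examples:
        votes = Counter(noisy_predict(ex, rng) for _ in range(n_votes))
        label, count = votes.most_common(1)[0]
        if count / n_votes >= min_agreement:  # confident -> keep the label
            kept.append((ex, label))          # else: edited out of training
    return kept

data = ["the acting is great", "a dull, plodding plot", "great fun throughout"]
for ex, lab in edit_pseudo_labels(data):
    print(ex, "->", lab)
```

The surviving `(example, label)` pairs would then be used as training data for the next round of self-training; filtering out uncertain pseudo-labels is what keeps label noise from compounding across rounds.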
“This study demonstrates the feasibility of developing relatively compact language models that excel in benchmark language understanding tasks when compared to models of similar or even larger sizes,” he concludes.