‘Small language model’ for researchers launched
New Delhi: Microsoft has released its newest compact “small language model”, titled Phi-2, which performs on par with or better than certain larger open-source Llama 2 models of fewer than 13 billion parameters.
Over the past few months, the Machine Learning Foundations team at Microsoft Research has released a suite of small language models (SLMs) called “Phi” that achieve remarkable performance on a variety of benchmarks.
The first model, the 1.3 billion-parameter Phi-1, achieved state-of-the-art performance on Python coding among existing SLMs (specifically on the HumanEval and MBPP benchmarks).
“We are now releasing Phi-2, a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with less than 13 billion parameters,” the company said in an update.
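The parameter counts quoted above can be roughly sanity-checked from a model's architecture. The sketch below uses a standard back-of-the-envelope formula for decoder-only transformers; the config values plugged in (hidden size 2560, 32 layers, vocabulary of about 51,200 tokens) are the publicly reported Phi-2 settings and should be treated as assumptions, not figures from this article.

```python
# Rough parameter-count estimate for a decoder-only transformer.
# Ignores biases, layer norms, and positional parameters, which
# contribute comparatively little at this scale.

def estimate_params(d_model: int, n_layers: int, vocab_size: int) -> int:
    """Approximate total parameters of a decoder-only transformer."""
    attention = 4 * d_model * d_model        # Q, K, V and output projections
    mlp = 2 * d_model * (4 * d_model)        # up- and down-projections (4x expansion)
    embeddings = vocab_size * d_model        # token embedding table
    return n_layers * (attention + mlp) + embeddings

# Assumed Phi-2-like configuration (hypothetical inputs for illustration).
total = estimate_params(d_model=2560, n_layers=32, vocab_size=51200)
print(f"~{total / 1e9:.1f}B parameters")  # lands near the quoted 2.7B figure
```

The estimate comes out around 2.6 billion, close to the 2.7 billion Microsoft cites, with the gap attributable to the terms the formula ignores.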
Phi-2 is an ideal playground for researchers, including for exploration of mechanistic interpretability, safety improvements, and fine-tuning experimentation on a variety of tasks.
“We have made Phi-2 available in the Azure AI Studio model catalog to foster research and development on language models,” said Microsoft.
The massive increase in the size of language models to hundreds of billions of parameters has unlocked a host of emerging capabilities that have redefined the landscape of natural language processing.
However, a question remains whether such emergent abilities can be achieved at a smaller scale using strategic choices for training, e.g., data selection.
“Our line of work with the Phi models aims to answer this question by training SLMs that achieve performance on par with models of much higher scale (yet still far from the frontier models),” said Microsoft.
© 2024 Hyderabad Media House Limited/The Hans India. All rights reserved. Powered by hocalwire.com