
Mark Zuckerberg unveils LLaMA, Meta’s new AI language model

Language model development has become the talk of the industry lately, and Meta isn't staying silent either. On Friday, February 24, 2023, the company's co-founder and CEO Mark Zuckerberg unveiled LLaMA, Meta's new AI language model. LLaMA is a foundational large language model with up to 65 billion parameters. Those interested can apply for access by filling in Meta's request form.

In recent months, we have seen new models developed and deployed by the likes of Microsoft, Google, and OpenAI.

The model, developed by Meta’s FAIR (Fundamental AI Research) team, is intended to help scientists and engineers explore AI applications and functions such as answering questions and summarizing documents.
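To give a sense of the kind of experimentation Meta has in mind, here is a minimal, hypothetical sketch of how a researcher with approved access might prompt a LLaMA-family checkpoint to summarize a document. It assumes the research weights have been converted to the Hugging Face transformers format (a community workflow, not part of Meta's release), and the model path is a placeholder.

# Illustrative sketch only: prompting a LLaMA-family checkpoint to summarize a document.
# Assumes the research weights were obtained through Meta's access form and converted
# to the Hugging Face `transformers` format; the path below is a placeholder, not an
# identifier from Meta's release.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/converted-llama-checkpoint"  # hypothetical local path

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)

document = "LLaMA is a collection of foundation language models released by Meta for research use."
prompt = f"Summarize the following document in one sentence:\n\n{document}\n\nSummary:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))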

The release of LLaMA comes as technology companies strive to promote advancements in AI techniques and integrate the technology into their commercial products. As CNBC notes, Meta’s version stands out from competing models because it will come in a selection of sizes, from 7 billion parameters to 65 billion parameters.

LLaMA isn’t like ChatGPT or Bing; it’s not a system that anyone can talk to. Rather, it’s a research tool that Meta says it’s sharing in the hope of “democratizing access in this important, fast-changing field.” In other words: to help experts tease out the problems of AI language models, from bias and toxicity to their tendency to simply make up information.

To this end, Meta is releasing LLaMA (which is not actually a single system but a quartet of different-sized models) under “a noncommercial license focused on research use cases,” with access granted to groups like universities, NGOs, and industry labs.

Meta believes that, unlike more finely tuned models designed for specific purposes, its foundational model will prove versatile, with multiple use cases.

Another way LLaMA is different, according to Meta: it requires “much less” computing power than previous offerings and was trained on text in 20 languages, focusing on those that use the Latin and Cyrillic alphabets.

Meta says that, despite its much smaller size, even the 13-billion-parameter version of LLaMA outperforms GPT-3, the model on which ChatGPT is built. Meta also attributed LLaMA’s performance to “cleaner” data and “architectural improvements” in the model that improved training stability.

To maintain the integrity of the model and prevent abuse, Meta will release it under a non-commercial license focused on research use cases. Academic researchers, those affiliated with government, civil society, and academic institutions, and industry research labs will be granted access to the model on a case-by-case basis.

Meta’s launch of LLaMA could mark a major development in AI language models. The social media giant’s commitment to open science, combined with a non-commercial license that lets researchers study the model, should help limit its misuse.

The versatility and problem-solving potential of LLaMA could offer insight into the substantial benefits AI can deliver to billions of people at scale.
