For AI, Meta's release of Llama 2 is a game changer | TheArticle


ChatGPT’s phenomenal potential and utility as a productivity tool – whether supporting children’s learning or helping with day-to-day tasks in the office – has been a game changer in our attitudes to AI. Suddenly, this year, like the internet of 25 years ago, it has become a reality in our lives and something we can see real benefit from. While we marvel at ChatGPT’s ability to help us, many commentators in the press worry about its potential for good and for the not so good, focusing on the far-off prospect of AI capable of original thought, or the “singularity”.


The reality of today’s AI is a productivity tool, not “HAL”, “ORAC” or even “Agent Smith”, who remain the stuff of sci-fi. One of the best descriptions I have heard of it is as a “neural language calculator”, calculating outputs from the data it has been trained on, at a scale we as humans could not ingest. Rather than the simple text suggestions or auto-correct pointers we are used to, generative AI uses its knowledge to guess what your next word will be. Its sophistication lies in the creation of sentences, paragraphs and articles that can explain topics or provide viewpoints, going so far as to do so in the style of a known author.

The tech sector’s focus on AI has, however, homed in on opening it up. This week’s release of the Large Language Model (LLM) Llama 2 by Meta as “open innovation” has brought the conversation to a head. This shouldn’t be confused with open source software. Open source allows free use of code by anyone for any purpose. Llama’s licensing has restrictions on how it can be used, through an Acceptable Use Policy aimed at ensuring its users do no harm. The Acceptable Use Policy sets out parameters for use and restricts bad actions, likely along the lines we will see governments request as they rush to regulate this space. This is a critical factor in its being opened up, and one that many are missing as they (incorrectly) shout about Llama 2 being “open source”. Even with Meta’s provision of a manual and documentation to support Llama 2, most people are not rushing to download it to build their own LLM.

Speaking at The Future of Britain Conference in London this week, Vishal Sikka (founder and CEO of Vianai Systems) explained that we need many more individuals with AI skills. Across the globe only 1.5 million people can build an AI app, only about 200,000 can automate an AI system, and fewer than 50,000 can correctly explain how ChatGPT works. This means we need more people to learn AI. Meta’s opening up of the Llama 2 LLM supports this by giving technologists access to an LLM. It will also allow many more individuals and companies to develop their skills and build their own products and tools. Llama 2 enables AI developers to build on it in an open and transparent ecosystem, using Hugging Face, the AI equivalent of GitHub, to share their innovation. This visibility means that we can see what is being done. That transparency, along with compliance with the Acceptable Use Policy, will enable trust.

For the UK in particular, access to an LLM has the potential to be a game changer. Rishi Sunak wants the UK to be a world leader in developing AI, but to date just seven companies in the US and China have held 90% of AI’s computing power, largely through closed LLMs which only they can access. That’s a little like the bonnet of every vehicle on the road being locked so that nobody can see the engine. If something goes wrong, there is no transparency as to why the failure took place, and no way for a local garage to fix it. Instead, there is complete dependency on the person or company who created it and who holds the key. For the UK, that means a tech company in another nation. That is neither a safe nor a secure position to be in. It is also a position in which the key-holder has dominance. With no transparency into the key-holder’s activity, how could we know whether they included bad actors?


Today’s technology sector was allowed to grow like this for decades, with the tech locked away in a proprietary walled garden. This closed approach created giant tech companies with a moat to close off competition. The power of the knowledge and tools needed across the board in a digital society has been owned and controlled behind closed doors by a few. Opening the technology up, as with Llama 2, enables this balance to be corrected. In democratising AI, society will avoid a repetition of these past mistakes.

AI is undoubtedly one of the most important areas of technology of our time, and its democratisation and transparency are essential. Closed LLMs have been a primary inhibitor to those with the skills to deliver more innovation faster. This is not a hypothesis, it’s a fact. It was proven earlier this year when the original Llama was licensed to researchers and leaked to the wider development community. Innovators around the world created advancements using this leaked LLM at a pace unimaginable to the individual companies working in this space, and one which those companies recognised they could not match. We must not inhibit innovation, we must facilitate it. That opportunity is created by opening an LLM. Without official access to an LLM of this scale, many were blocked from honing their skills and developing new AI products. It stopped competition and new market entrants. LLM access was an inhibitor because building one comes at a massive cost in terms of compute, energy and skilled resources. Training LLMs, teaching them the knowledge to answer your questions, requires vast, almost unimaginable quantities of data.

The computer scientist Dame Wendy Hall said yesterday that opening AI is “like giving people a template to build a bomb”. Has she not seen the internet? The words “bomb template” bring up an immediate response in any search engine. Should we remove search engines? The technology horse bolted decades ago.

The world has long had access to technology which allows new market entrants, whether small companies or skilled individuals, to innovate, to the benefit of all. In the world of open source software, where outputs are freely licensed for re-use and modification, this supports the collaborative innovation which sits beneath the internet, the cloud, blockchain and, of course, AI. Without it, we would not have the infrastructure that we need. Open source provides the plumbing for our digital worlds and strategies. Meta has not gone as far as fully open sourcing Llama 2, but by giving access to it, Meta is supporting a better digital future for all.


What we need now is access to the recipe and the data on which it is trained. We need to open up this technology further, not close it down. As Rishi Sunak approaches the autumn summit, understanding this need and moving beyond the naysayers will be essential if the UK wants to lead in AI.

Amanda Brock is CEO of OpenUK, which is listed by Meta as a partner and supporter in the release of Llama, and editor of the book _Open Source Law, Policy and Practice_, published in 2022 by Oxford University Press, with open access sponsored by the Vietsch Foundation.

