As artificial intelligence (AI) advances, researchers around the world are embracing a new trend: small AI models that can run on laptops or local servers, leaving behind cloud-based solutions such as ChatGPT. This transition allows scientists to conduct their research with greater independence, stronger data protection and lower costs.
For years, large AI models such as those from OpenAI, Google and Meta have dominated the field, giving users access to powerful tools for language processing and data analysis. However, these models often require high-performance servers and the use of cloud resources. In response, companies such as Google DeepMind, Microsoft and Meta have started to offer more compact versions of their models that can run on more affordable hardware such as personal computers and local servers.
The reasons for this shift are manifold. Firstly, researchers can save considerable costs by running models locally. Cloud-hosted models require ongoing payments for subscriptions or resource usage, which can be prohibitively expensive for academic institutions or small research teams with limited budgets.
Secondly, the ability to run models locally means more data security and privacy. In fields such as medicine and bioinformatics, where protecting personal data is crucial, using AI in local environments allows scientists to work without relying on a third party for data storage. Instead of sending sensitive information to an external server, researchers can retain full control of the data, reducing the risk of data breaches or misuse of information.
In addition, the move to smaller models has improved the reproducibility of scientific studies. Because models can be shared and run on standard devices, other scientists can easily replicate experiments and check results without relying on expensive cloud infrastructure. This is particularly important in areas where accuracy and verification are key, such as biotechnology or drug discovery.
Better results
Another aspect to consider is accessibility. While large AI models are powerful, they are often prohibitively expensive and technically complex for many smaller institutions or in developing countries. By making it easier to download and run models, researchers around the world have more opportunities to use the latest AI technologies, democratizing access and encouraging greater diversity of research worldwide.
For example, areas such as weather forecasting, medical image analysis and climate change research can benefit greatly from these more accessible tools. Not only can researchers process data faster, but they can also tailor models to their specific needs without having to rely on the ongoing support of large technology companies.
Challenges and opportunities
Despite the advances, the introduction of local models is not without its challenges. One of the main difficulties lies in available hardware capacity. Although these new models are more compact, they still require computers with substantial processing power to handle the most demanding tasks. However, the rapid development of hardware, together with the increasing availability of specialized processing chips such as GPUs (Graphics Processing Units) and TPUs (Tensor Processing Units), is easing this transition.
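As a rough illustration of why hardware capacity matters, the memory needed just to hold a model's weights can be estimated from its parameter count and numeric precision. The function below is a back-of-the-envelope sketch (the 7-billion-parameter figure is a hypothetical example, and real runtime overhead such as activations is not counted):

```python
def estimate_weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Approximate memory (GiB) needed just to store the model weights."""
    bytes_total = num_params * bits_per_param / 8  # bits to bytes
    return bytes_total / 1024**3                   # bytes to GiB

# A hypothetical 7-billion-parameter model:
fp16 = estimate_weight_memory_gb(7e9, 16)  # 16-bit weights: ~13 GiB, GPU territory
int4 = estimate_weight_memory_gb(7e9, 4)   # 4-bit weights: ~3.3 GiB, laptop-friendly
print(f"fp16: {fp16:.1f} GiB, int4: {int4:.1f} GiB")
```

This simple arithmetic is why reduced-precision versions of the same model can move from data-center GPUs to ordinary consumer hardware.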
In addition, smaller models often process large volumes of data less efficiently than their larger counterparts. However, technology companies have worked to optimize these models without significantly compromising their performance, making them a viable option for a wide range of applications.
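One common optimization behind these compact models is weight quantization: storing parameters at lower numeric precision to cut memory use, at the cost of a small rounding error. The snippet below is a minimal sketch of symmetric int8 quantization, for illustration only; it is not the implementation used by any particular vendor:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 using a single symmetric scale factor."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)  # stand-in for a weight tensor
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print("memory ratio:", q.nbytes / w.nbytes)              # int8 is 1/4 of float32
print("max abs error:", float(np.abs(w - w_hat).max()))  # bounded by scale / 2
```

The trade-off is visible directly: memory drops fourfold while the reconstruction error stays within half a quantization step, which is why such models remain usable for many tasks.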