Is Artificial Intelligence sustainable?

Thursday, November 23, 2023

On November 30, 2022, the startup OpenAI made ChatGPT, an advanced Artificial Intelligence language model, available to the public. This was a milestone and, at the same time, the continuation of a revolution that began with the rise of social networks in the 2000s. The service reached more than 1 million users in its first five days.


Thus a first generation of systems based on Artificial Intelligence was born, quickly spreading to text-to-image generation and countless other applications and services. Today, ChatGPT competes with other chatbots such as Claude from Anthropic, Llama 2 from Meta, and Bard from Google.


From that first million users, ChatGPT had grown to 100 million active users by August 2023 and was receiving more than 10 million queries per day. But have you ever asked yourself what it takes to keep a platform of this nature running? Have you considered the amount of resources it consumes? Is Artificial Intelligence, as we know it today, sustainable? In this article we look, in general terms, at what these chatbots require to be built and to operate, what that implies, and the outlook for this technology in the future.

Definition and historical review of the development of Artificial Intelligence

Artificial Intelligence (AI) is a field of computer science that focuses on creating systems that can perform tasks that require human intelligence, such as learning, perception, reasoning, and decision-making. John McCarthy coined the term “Artificial Intelligence” in 1956.


Since then, AI has seen rapid growth and reached several important milestones:


• In 1959, Arthur Samuel developed a checkers program that could learn from its own mistakes and improve its play over time.


• In 1961, James Slagle developed SAINT, a program that could solve symbolic integration problems at the level of a first-year calculus course.


• In the 1960s, the Dendral program was able to identify the molecular structure of unknown chemical compounds.


• In 1979, the MYCIN program was able to diagnose infectious diseases with accuracy similar to that of human doctors.


• In 1997, IBM's Deep Blue program defeated world chess champion Garry Kasparov.


• In 2011, IBM's Watson program defeated the top human players on the television game show Jeopardy.


Between roughly the late 1970s and the mid-1990s, AI went through phases of stagnation known as “AI winters.” These were characterized by a decline in funding and public interest, due in part to the lack of significant advances in the field and the inability of AI systems to meet expectations. In addition, many of the AI systems developed during this period were very expensive and required large amounts of computational resources, which limited their accessibility and usefulness.


However, starting in the 1990s, AI experienced a renaissance thanks to advances in machine learning and data mining, as well as the availability of large amounts of data and computational resources, and it has grown rapidly ever since.

Infrastructure requirements for Systems Based on Artificial Intelligence

HARDWARE


Infrastructure requirements vary widely and depend on the type of system being developed. In general, AI systems require large amounts of computational resources, such as high-speed processors, RAM, and data storage. The main types of processing hardware are:


• Central processing units (CPUs): CPUs are the most common chips in AI systems and are used to perform general calculations. Current CPUs typically have multiple cores and can handle multiple tasks simultaneously.


• Graphics processing units (GPUs): GPUs are specialized chips used to accelerate graphics processing and intensive mathematical calculations. GPUs are particularly well suited for AI and Machine Learning (ML) workloads as they can handle large amounts of data and perform complex calculations quickly.


• Field Programmable Gate Arrays (FPGAs): FPGAs are programmable chips that can be reconfigured to perform specific tasks. They are particularly useful for AI and ML applications that require large numbers of parallel computations.


• Application Specific Integrated Circuits (ASICs): ASICs are chips designed specifically to perform a specific task or set of tasks. ASICs are particularly useful for AI and ML applications that require many specialized calculations.
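
To make this concrete, the short sketch below (a minimal example assuming PyTorch is installed, with an arbitrary matrix size) shows how an AI workload typically checks for a CUDA-capable GPU and falls back to the CPU when none is available.

```python
import torch

# Pick the fastest device available: a CUDA GPU if present, otherwise the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small matrix multiplication placed on the selected device.
a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)
c = a @ b

print(f"Ran a 1024x1024 matrix multiplication on: {device}")
```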


SOFTWARE


In addition to the hardware above, specialized software tools are needed for the development and training of AI models, such as machine learning libraries and AI frameworks. The programming languages most used in Artificial Intelligence (AI) development are Python, C++, R, Java, Scala, Julia, Rust, and Lisp.


• Python is the most popular programming language for AI due to its ease of use, simple syntax, and the wide variety of available AI libraries such as TensorFlow, Keras, PyTorch, Scikit-learn, and Pandas.


• C++ is another popular language for AI due to its performance and efficiency, and is often used for the development of AI libraries and high-performance AI systems.


• R is a specialized language for analytics and is often used for statistical analysis and data visualization.


• Java is a versatile and scalable language that is often used for the development of enterprise applications and large-scale AI systems.


• Scala is a functional programming language often used for the development of distributed AI systems.


• Julia is a high-performance programming language that is often used for the development of AI systems that require this feature.


• Rust and Lisp also appear in AI development: Lisp has a long history in the field, while Rust is increasingly used for performance-critical AI infrastructure.


On the other hand, the most popular AI frameworks and libraries are: TensorFlow, Keras, PyTorch, Scikit-learn, Pandas, Theano, Caffe, Torch, MXNet, and H2O. These frameworks and libraries provide tools for the development and training of AI models, such as neural networks, decision trees, and machine learning algorithms.
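
As a minimal illustration of what these libraries offer, the sketch below (assuming Scikit-learn is installed, and using its bundled Iris dataset purely as an example) trains one of the model types mentioned above, a decision tree, and evaluates it on held-out data.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load a small, bundled dataset so the example is self-contained.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Train a decision tree, one of the classic model families named above.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

# Evaluate on the held-out split.
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```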


HUMAN RESOURCES


The development of AI requires highly skilled and knowledgeable experts drawn from a wide variety of disciplines, such as computer science, statistics, mathematics, psychology, philosophy, neuroscience, and linguistics. Each group of experts brings a unique perspective to the development of AI and plays an important role in advancing the field.


• Computer science experts are responsible for developing AI algorithms and systems, as well as creating specialized software tools for developing and training AI models.


• Statistics and mathematics experts are responsible for developing mathematical and statistical models used in AI, such as neural networks and decision trees.


• Experts in psychology and neuroscience provide insights into how the human brain works and cognition, which can help improve AI models.


• Philosophy and ethics experts provide insight into ethical and social issues related to AI, such as privacy, security, and fairness.


• Linguistics experts provide knowledge about natural language processing, which is essential for the development of language-based AI systems such as chatbots.

Resources or inputs required and their environmental impact according to the increase in demand

The environmental impact of these systems grows with demand. Rising demand for electrical energy and water can harm the environment by increasing greenhouse gas emissions and water pollution.


A clear example is the heat generated by the equipment during operation, which forces the platform to rely on cooling systems. These, in turn, consume large quantities of fresh water, which can accelerate the depletion of natural resources.


According to some estimates, Google's data centers consumed 12.7 billion liters of fresh water for cooling in 2021, while training GPT-3 in Microsoft's data centers is estimated to have used around 700,000 liters of fresh water. Several studies on the subject reach similar conclusions:


• According to a UNESCO article, AI can have both positive and negative effects on the environment. On the one hand, AI improves efficiency and productivity in sectors such as health and climate science. On the other hand, certain AI systems consume substantial amounts of energy and amplify industries that already harm the environment, raising ethical and environmental dilemmas.


• An article in Milenio notes that large quantities of materials are needed to manufacture the electronic devices behind the digital infrastructure AI requires, increasing demand for, and shortages of, precious metals and rare earths. Additionally, training AI models requires a large amount of energy, which can have a significant environmental impact.


• The University of Massachusetts Amherst paper “Energy and Policy Considerations for Deep Learning in NLP” recommends a concerted effort by industry and academia to promote research into more computationally efficient algorithms and less power-hungry hardware. Providing easy-to-use APIs that implement more efficient alternatives is another way machine learning software developers could help reduce the energy associated with model tuning.

Platform costs and ROI

Shifting costs to users can have significant implications for accessibility and equity. If costs are too high, access to AI technology may be restricted to those who can afford it, perpetuating the digital divide. As for the return on investment, it depends on the type of AI system being developed and its purpose.


On resource optimization for the OpenAI model, Microsoft's Azure VP Eric Boyd said: “There are still open questions about whether such a large and complex model could be deployed cost-effectively at scale to meet real-world business needs.”


The risk that costs will exceed the company's revenues and financing is real, particularly given OpenAI's capped-profit structure. If investors come to see even those capped returns as unlikely, their interest in financing the company may decrease.

Possible alternatives to reduce the consumption of required environmental resources

Several options can be considered to reduce the consumption of the required environmental resources. First, renewable energy sources, such as solar and wind energy, can be used to reduce the carbon footprint of AI systems.


Likewise, resource optimization techniques, such as model compression, data precision reduction, and the development of “Green Algorithms,” can be used to reduce energy and memory consumption.

Model compression is a set of techniques that aim to reduce the memory footprint and computational requirements of AI models. These techniques can be broadly classified into two categories: model pruning and model quantization.


• Model pruning involves removing redundant or unimportant neural connections in a neural network, reducing the size of the model and improving its efficiency.


• Quantization, on the other hand, involves reducing the precision of the data used in a model, which reduces the amount of memory and power needed to store and process the data.


By reducing the memory footprint and computational requirements of AI models, model compression reduces the amount of energy and resources needed to train and run them. It can also improve the efficiency and performance of AI models, which has a positive impact on the accessibility and usefulness of AI technology.
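
The sketch below is a rough illustration of both ideas, assuming PyTorch is installed; the tiny model and the 50% pruning ratio are arbitrary choices for demonstration, not recommendations.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A deliberately small model; real compression targets much larger networks.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Pruning: zero out the 50% of weights with the smallest magnitude in the first layer.
prune.l1_unstructured(model[0], name="weight", amount=0.5)
prune.remove(model[0], "weight")  # make the pruning permanent

# Quantization: store and run the linear layers with 8-bit integers instead of 32-bit floats.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

sparsity = (model[0].weight == 0).float().mean().item()
print(f"Sparsity of first layer after pruning: {sparsity:.0%}")
print(quantized)
```

In practice the two techniques are often combined, since pruning shrinks the number of parameters while quantization shrinks the cost of each one.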


Data precision reduction is a technique used to reduce the number of bits required to represent the data used in an AI model. This technique is often used in AI systems that require a large amount of memory and power to store and process data, such as image and video processing AI systems.


This technique reduces the amount of memory and power needed to store and process data. For example, if you use 16-bit precision instead of 32-bit precision to represent a number, you can reduce the amount of memory required to store the number by half.
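
The NumPy sketch below (array size chosen arbitrarily) shows this halving directly by comparing the memory used by the same values stored as 32-bit and as 16-bit floats.

```python
import numpy as np

# One million values stored at two different precisions.
x32 = np.random.rand(1_000_000).astype(np.float32)
x16 = x32.astype(np.float16)

print(f"float32: {x32.nbytes / 1e6:.1f} MB")  # ~4.0 MB
print(f"float16: {x16.nbytes / 1e6:.1f} MB")  # ~2.0 MB
```

The same idea underlies the mixed-precision training modes offered by the frameworks listed earlier.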


Reducing data precision is therefore another way to cut the environmental resources AI systems consume: less memory and energy is needed to store and process the data, and consequently to train and run the models.


“Green Algorithms,” for their part, are another approach to reducing the energy and resources that Artificial Intelligence (AI) development requires. These algorithms are designed to make AI more inclusive and more respectful of the planet, and their design is now considered not only an environmental necessity but also a requirement for meeting the Sustainable Development Goals of the 2030 Agenda.


“Green Algorithms” aim for less polluting AI, reducing the carbon footprint with algorithms that are more efficient and consume less energy. Achieving them requires both environmental commitment and additional economic resources, something not every laboratory or technology center can afford: building efficient models takes many experiments, and that carries an energy and financial cost that not everyone can bear.


In Spain, the National Green Algorithms Plan establishes a set of voluntary good practices for the certification of sustainable hardware and software services companies. The plan also defines standards and tools to measure the energy consumption of algorithms, in order to raise awareness among AI developers about the environmental impact of their decisions.
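
One practical way for developers to build that awareness is to measure the energy use and emissions of their own runs. The sketch below uses the open-source codecarbon package (assuming it is installed; the workload is only a placeholder for a real training loop) to estimate the emissions of a piece of code.

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="example-training-run")
tracker.start()
try:
    # Placeholder for the actual model training loop.
    total = sum(i * i for i in range(10_000_000))
finally:
    emissions_kg = tracker.stop()

print(f"Estimated emissions for this run: {emissions_kg:.6f} kg CO2eq")
```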


Voluntary good practices for the certification of sustainable hardware and software services companies include, but are not limited to, the following:


• Energy efficiency: Companies can improve the energy efficiency of their AI systems by using more efficient hardware and software, as well as by implementing energy optimization techniques, such as model compression and reducing the precision of data.


• Use of renewable energy: Companies can reduce their carbon footprint by using renewable energy sources, such as solar and wind energy, to power their AI systems.


• Recycling and reuse: Companies can reduce their environmental impact by recycling and reusing hardware and software components.


• Sustainable design: Companies can design their AI systems to be more sustainable, for example by using recycled materials and reducing the size and weight of devices.


• Transparency and accountability: Companies can improve the transparency and accountability of their AI systems by implementing data governance practices and adopting ethical and privacy standards.


References:


https://portal.mineco.gob.es/es-es/comunicacion/Paginas/algoritmos-verdes.aspx


https://revista.aenor.com/340/las-9-certificacións-para-la-transformacion-digital.html


https://www.lamoncloa.gob.es/serviciosdeprensa/notasprensa/asuntos-economicos/Paginas/2021/151021-algoritmos-verdes.aspx


https://dataladder.com/es/que-es-la-exactitud-de-los-datos-por-que-es-importante-y-como-CAN-las-empresas-asegurarse-de-tener-datos-exact/


https://ts2.space/es/ia-y-compression-de-models/


https://www.cryptopolitan.com/es/es-el-impacto-ambiental/


https://www.milenio.com/opinion/varios-autores/ciencia-tecnologia/el-impacto-ambiental-de-la-ia-y-la-computacion-del-futuro


https://digital-strategy.ec.europa.eu/es/policies/expert-group-ai


https://research.aimultiple.com/ai-chip-makers/


https://research.contrary.com/reports/openai


https://www.algotive.ai/es-mx/blog/historia-de-la-inteligencia-artificial-el-machine-learning-y-el-deep-learning


https://miinteligenciaartificial.com/historia-de-la-inteligencia-artificial/


https://www.sostenibilidad.com/desarrollo-sostenible/la-alianza-entre-inteligencia-artificial-y-desarrollo-sostenible/


https://observatorio-ametic.ai/inteligencia-artificial-en-sostenibilidad/inteligencia-artificial-en-sostenibilidad

https://aulainsitu.com/inteligencia-artificial/

