AI – What is artificial intelligence and what is it used for? Many people have come across the term AI, which stands for artificial intelligence, and wonder what it is all about. If you happen to be one of them, here is a little insight into the subject.
The latest technological creations lead us to reflect on the direction the world is taking. Indeed, for some time now, science and technology have been proposing a great world revolution: artificial intelligence (AI).
Although there is no exact definition of what it means, artificial intelligence is the name given to a number of technologies with characteristics or capabilities that were previously unique to the human intellect.
The term is applied when a machine mimics cognitive functions that humans associate with the human mind, such as learning or problem-solving.
What is Artificial Intelligence? History and origin
In 1956, scientists Allen Newell, Herbert Simon, Marvin Minsky, Arthur Samuel, and John McCarthy met at the Dartmouth Conference, a meeting that marked the creation of the discipline of artificial intelligence. They shared the optimistic belief that machines could soon be equipped with the ability to think.
Now, if we go back to the Greeks, the basic ideas about artificial intelligence lead us to Aristotle, who was the first to describe a set of rules detailing some of the workings of the mind used to reach rational conclusions. Some time later, Ctesibius of Alexandria built what is considered the first rationally self-controlled machine, though one without any capacity for reasoning.
In the late 1950s and early 1960s, artificial intelligence enjoyed one of its best eras, as machines managed to play checkers better than many human beings, to “learn” English, and to solve algebraic and logical problems.
Later, between 1968 and 1970, Terry Winograd, a professor of computer science at Stanford University, developed the SHRDLU system, which made it possible to question and give orders to a simulated robot that moved objects in a world of blocks.
Early in the new century, after significant technological advances, the multinational IBM developed a supercomputer called Watson, which won the televised knowledge competition Jeopardy! against two of its greatest champions.
Today, artificial intelligence has not only revolutionized the business world but also the social sphere, with applications ranging from the rapid detection of cancer to the fight against deforestation in the Amazon.
What are the categories of artificial intelligence?
Stuart Russell and Peter Norvig, in their book Artificial Intelligence: A Modern Approach, differentiate between four types of artificial intelligence:
- Systems that think like humans: systems that attempt to mimic human thinking, such as decision-making, problem-solving, and learning.
- Systems that act like humans: systems that try to behave like humans, that is, to mimic human behaviour. Robotics is an example.
- Systems that think rationally: systems that try to imitate the rational, logical thinking of human beings; for example, the study of the calculations that make it possible to perceive, reason, and act.
- Systems that act rationally: systems that try to behave rationally; this approach is related to intelligent behaviour in artefacts.
What are conventional and computational artificial intelligence?
Conventional artificial intelligence, known as symbolic-deductive AI, is based on the formal and statistical analysis of human behaviour when faced with different problems. It supports decision-making in specific, well-defined problems.
It facilitates complex decision-making and offers solutions to particular problems. Systems of this kind can also incorporate a degree of autonomy, regulating and controlling themselves.
Meanwhile, computational artificial intelligence, known as sub-symbolic-inductive AI, involves interactive development or learning, which is based on empirical data.
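To make the distinction concrete, here is a minimal, hypothetical sketch (the function names, rules, and data below are invented for illustration, not taken from any real system): the symbolic-deductive approach encodes knowledge as explicit hand-written rules, while the sub-symbolic-inductive approach induces its behaviour from empirical examples.

```python
# Symbolic-deductive: behaviour is written down as explicit, hand-coded rules.
def symbolic_spam_filter(subject: str) -> bool:
    rules = ["free money", "winner", "click here"]   # knowledge supplied by a human
    return any(phrase in subject.lower() for phrase in rules)

# Sub-symbolic-inductive: behaviour is induced from labelled empirical examples.
def train_keyword_weights(examples):
    """Learn a weight per word from (subject, is_spam) pairs by simple counting."""
    weights = {}
    for subject, is_spam in examples:
        for word in subject.lower().split():
            weights[word] = weights.get(word, 0) + (1 if is_spam else -1)
    return weights

def learned_spam_filter(subject: str, weights) -> bool:
    score = sum(weights.get(w, 0) for w in subject.lower().split())
    return score > 0

# Tiny invented training set of labelled observations.
examples = [("win free money now", True),
            ("meeting agenda attached", False),
            ("free winner click here", True),
            ("project status update", False)]
weights = train_keyword_weights(examples)

print(symbolic_spam_filter("Click here to claim"))       # rule fires -> True
print(learned_spam_filter("free money offer", weights))  # learned weights -> True
```

The symbolic filter only knows what was written into it, while the learned filter's behaviour changes whenever the training examples change.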
How does artificial intelligence work?
Artificial intelligence is developed from algorithms: mathematical procedures with the capacity to learn. To train these algorithms, data is needed: observable data, publicly available data, or data generated within certain companies. The training process is repeated over this data so that the algorithms learn from it.
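As a minimal sketch of that idea (the dataset and all numbers below are invented for illustration), here is an algorithm that learns two parameters from observable data by repeating the training process over the data, a plain gradient-descent fit of a straight line:

```python
# Observable data: (x, y) pairs, roughly following y = 2x + 1 (invented example).
data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]

w, b = 0.0, 0.0   # model parameters, starting with no knowledge
lr = 0.01         # learning rate

for epoch in range(2000):            # repeat the process to learn from the data
    grad_w = grad_b = 0.0
    for x, y in data:
        error = (w * x + b) - y      # prediction error on one observation
        grad_w += 2 * error * x      # gradient of squared error w.r.t. w
        grad_b += 2 * error          # gradient of squared error w.r.t. b
    w -= lr * grad_w / len(data)     # adjust parameters to reduce the error
    b -= lr * grad_b / len(data)

print(round(w, 2), round(b, 2))      # learned slope and intercept
```

After enough repetitions the parameters settle close to the pattern hidden in the data; real systems do the same thing at a vastly larger scale, with millions of parameters and examples.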
What is artificial intelligence used for? Terrain and real-world applications
Artificial intelligence has been applied in a large number of fields, such as robotics, language understanding and translation, and speech and word recognition. These are the main areas where its evolution has been most notable.
What are the risks associated with artificial intelligence?
While artificial intelligence brings many advantages to certain aspects of life, some experts believe it can also generate new risks.
The financial market is among the most vulnerable, because the ability of computers to process huge amounts of data can empower those who control them and allow them to dominate finance on an international scale.
Another issue is the lack of global regulation.
But perhaps the most worrying risk, and the one that can generate the most problems, is the loss of jobs. A study published in China in 2015 reported that almost 50% of today's professions would become completely redundant by 2025 if artificial intelligence continued to transform businesses at its current pace.
With this in mind, experts have begun to examine each use of artificial intelligence to determine what its limitations are and how they should be addressed to ensure the continued protection of human beings.
Courses to learn about artificial intelligence
Machine Learning/Coursera (Advanced): Taught by Andrew Ng, one of the fathers of neural networks. The course gives an introduction to linear algebra and statistics, and explains the different steps in creating a machine-learning model.
Ng also offers several other courses on artificial intelligence, in which he helps students understand how it has influenced fields such as medicine, personalized education, and self-driving cars.
Introduction to artificial intelligence/Udacity (intermediate): This intermediate-level course aims to teach those interested in the basics of modern artificial intelligence, as well as some of its main applications in probabilistic reasoning, robotics and natural language processing.
Deep Learning/Google (Advanced): A course that aims to show how to optimize basic neural networks, convolutional neural networks, and long short-term memory (LSTM) networks, as well as complete learning systems, in TensorFlow.
Artificial Intelligence/MIT (Basic): This course was taught face-to-face at the Massachusetts Institute of Technology (MIT) in 2010. It is available on the university’s course platform and contains videos, support materials, and exercises of various types.
Artificial Intelligence/Stanford (various difficulty levels): Stanford engineering professors offer online courses in their school's most popular computer science subjects. The courses include various tests and two exams; students who pass them online receive a letter of completion.