By Catherine Munoz

The remarkable evolution of Artificial Intelligence
Artificial intelligence’s history is as remarkable as its current results. This history begins in an uncommon way in the mid-19th century, with a woman named Ada Augusta Byron, better known as Ada Lovelace, the daughter of the well-known poet Lord Byron. Lovelace was an Englishwoman whose fascination with mathematics brought her face to face with several mechanical inventions of her time, giving her a fresh, revolutionary perspective on them.[1]

Ada Lovelace collaborated with the scientist and mathematician Charles Babbage on his Analytical Engine[2], which in the end was never built. As a result of this work, she is considered the author of the first algorithm as such, and she has been called the first computer “programmer”.[3]

In 1843, as part of her work as an assistant, Ada translated a paper by Luigi Menabrea, an Italian scientist who had written about Babbage’s Analytical Engine. Along with the translation, she prepared extensive notes of her own that analyzed the functional nature of the machine. In the end, the translation was pushed into the background, and the additional notes took center stage as a remarkably early understanding of the foundations of computer systems and artificial intelligence from a mathematical and philosophical perspective, opening up a number of possibilities for a machine originally invented exclusively for numerical calculations.[4]

Ultimately, what Ada discerned was the capacity a machine could have to process, in a consistent and logical way, not only numbers but also other kinds of abstract data, such as musical notes.

It is also said that Ada was the first person to write an algorithm: in the note identified as Note G [5] of the same document, she detailed a logical sequence for calculating Bernoulli numbers.[6]

Almost 100 years elapsed from Ada Lovelace’s deep analyses until one of the most important scientists of the 20th century, Alan Turing, empirically developed the scientific and logical foundations that made the progress of computer science and AI possible.[7] Like everyone in this story, Turing was a remarkable man, far ahead of his time.

In a paper published in 1937[8] called “On Computable Numbers, with an Application to the Entscheidungsproblem”[9], Turing gave new definitions of computable numbers and a description of a universal machine, ultimately showing that the Entscheidungsproblem cannot be solved. Like Ada Lovelace’s analysis, in this paper and later ones Turing combined logic with philosophy and mathematics to give rise to his well-known theories.

“Calculator” was, at that time, the name given to people whose work was performing mathematical calculations. Turing based his machine on that function. As explained by Nils J. Nilsson[10], its operation was very simple and it was made of few parts: an infinite tape divided into cells, each with a 1 or a 0 printed on it; a reading and writing head; and a logical unit that can change its own state in order to command the writing or reading of a 1 or a 0 and move the tape to the left or to the right, reading a new cell or ending an operation.

In principle, each machine had a unique function; Turing then devised a way in which multiple functions could be encoded in the same input data, thus creating the so-called Universal Turing Machine, which is the basis of current computers.
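The machine’s operation described above can be sketched in a few lines of modern code. This is a simplified illustration, not Turing’s own formalism: the function name and the rule table below are hypothetical, and the tape is finite rather than infinite.

```python
def run_turing_machine(rules, tape, state="start"):
    """rules maps (state, symbol) -> (write, move, next_state);
    move is -1 (left) or +1 (right); the run ends when the state
    becomes 'halt' or the head leaves the (finite) tape."""
    head = 0
    while state != "halt" and 0 <= head < len(tape):
        symbol = tape[head]                    # the head reads the cell
        write, move, state = rules[(state, symbol)]
        tape[head] = write                     # the head writes a 0 or 1
        head += move                           # the tape moves left/right
    return tape

# Hypothetical rule table: invert every cell, moving right until the tape ends.
rules = {
    ("start", 0): (1, +1, "start"),
    ("start", 1): (0, +1, "start"),
}
print(run_turing_machine(rules, [1, 0, 1, 1]))  # [0, 1, 0, 0]
```

Encoding the rule table itself as data on the tape is exactly the step that turns a single-purpose machine into the Universal Turing Machine.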

With the advent of the Second World War, this invention and its creator turned to intelligence and national security. Turing was recruited by the Government Code and Cypher School based at Bletchley Park, where he made important contributions to the field of codebreaking.

Then, in 1950, Turing published in the journal Mind his paper “Computing Machinery and Intelligence”.[11] The paper opens with the words: “I propose to consider the question, ‘Can machines think?’”.

In that paper, Turing proposed the so-called “Turing Test”, whose purpose was to determine whether a computer could “think”. In simple terms, the test required a machine to hold a text conversation with a human being. If, after five minutes, the human being was convinced of talking to another human being, the machine was said to have passed the test. In John McCarthy’s words: “He argued [Turing] that if the machine could successfully pretend to be human to a knowledgeable observer then you certainly should consider it intelligent.”[12]

[Image: Alan Turing statue at Bletchley Park, commissioned from Stephen Kettle by the late Sidney E. Frank, depicting Turing seated. © Gerald Massey, www.geograph.org.uk]

So far we have seen the development of AI from the standpoint of mathematics and logic, but after this point two well-defined approaches to the study and development of AI became clearly distinct. On one side are those who based their work on logic and mathematics, with philosophical and epistemological resources, corresponding to symbolic or classical AI; on the other are those who based their studies on biology, known as connectionist AI, or cybernetics.

Cybernetics was defined by Wiener as “the science of control and communication, in the animal and the machine”.[13]

The preceding definition was taken from one of the works of W. Ross Ashby, an English psychiatrist who published a theory of adaptive behavior in animals and machines.[14]

In parallel with the development of symbolic or classical AI, the neurologist Warren S. McCulloch and the logician Walter Pitts published in 1943 a paper called “A Logical Calculus of the Ideas Immanent in Nervous Activity”[15], where they proposed an artificial neuron model based on an explanation of how the human nervous system operates.

According to Nils J. Nilsson[16], the McCulloch-Pitts neuron is a mathematical abstraction of a brain cell, with inputs corresponding to dendrites and an output value of 0 or 1 analogous to an axon; these “neurons” can connect to one another, forming networks. In this respect, Nilsson says: “Some neural elements are excitatory — their outputs contribute to “firing” any neural elements to which they are connected. Others are inhibitory — their outputs contribute to inhibiting the firing of neural elements to which they are connected. If the sum of the excitatory inputs less the sum of the inhibitory inputs impinging on a neural element is greater than a certain “threshold,” that neural element fires, sending its output of 1 to all of the neural elements to which it is connected”.[17]
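Nilsson’s description translates almost directly into code. The sketch below is an illustration of that threshold rule, not the original 1943 formalism; the function name and the gate wirings are hypothetical.

```python
def mcculloch_pitts(excitatory, inhibitory, threshold):
    """Fire (output 1) when the sum of excitatory inputs minus the sum
    of inhibitory inputs exceeds the threshold; otherwise output 0."""
    return 1 if sum(excitatory) - sum(inhibitory) > threshold else 0

# With threshold 1, two excitatory inputs behave like an AND gate:
print(mcculloch_pitts([1, 1], [], 1))   # 1: both inputs active, 2 > 1
print(mcculloch_pitts([1, 0], [], 1))   # 0: only one input active
# A single active inhibitory input cancels one excitatory input:
print(mcculloch_pitts([1, 1], [1], 1))  # 0: 2 - 1 = 1 is not > 1
```

Wiring many such units together, with the outputs of some feeding the inputs of others, gives the networks McCulloch and Pitts showed could compute logical functions.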

In sum, elements of mathematics, logic, and biology have all been present in the development of AI, to which psychology and the cognitive sciences must be added, since they have been essential to the development of learning in artificial neurons.

The psychologist Frank Rosenblatt developed the Perceptron in 1957, the first artificial neural network capable of learning, which allows the recognition of patterns based on a binary classifier.[18] Later, David Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams described backpropagation, a method for optimizing multi-stage dynamic systems.
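The perceptron learning rule, in simplified form, nudges the classifier’s weights toward each misclassified example. Below is a minimal sketch of that rule as a toy illustration, not Rosenblatt’s hardware implementation; the function name and training data are hypothetical.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Perceptron rule: for each misclassified sample, shift the
    weights and bias toward the correct answer.
    samples is a list of (inputs, label) pairs with label 0 or 1."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn the logical OR function, a linearly separable binary classification.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
         for x, _ in data]
print(preds)  # [0, 1, 1, 1]
```

A single perceptron can only learn linearly separable patterns; overcoming that limit is precisely what multi-layer networks trained by backpropagation made possible.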

In 1956, at the Dartmouth Conference organized by Marvin Minsky and John McCarthy, the expression “Artificial Intelligence” was coined for the first time. This conference marked the beginning of a new era in the development of AI.[19]

Government agencies, such as the U.S. Defense Advanced Research Projects Agency (DARPA), invested heavily in Artificial Intelligence innovation during the Cold War. Then, between 1974 and 1980, the so-called “AI Winter” took place, with reduced interest in, and financing of, these technologies.

However, beginning in the early 1990s there was explosive growth in AI technology, driven mainly by the rise of Big Data and supercomputers.

Machine Learning.

The development of Machine Learning goes back a long way. In 1959, Arthur Lee Samuel coined the concept of machine learning for the first time when designing a program that could automatically learn the parameters of a function for evaluating positions on a checkers board.[20] His machine learning approach was explained in a paper published in the IBM Journal of Research and Development in 1959.[21]

As indicated in another post, machine learning has multiple uses, from early disease prediction, marketing and business forecasting, and advanced facial recognition to many kinds of predictions about human behavior and the environment.

As mentioned by Nilsson, machine learning refers to the “changes” performed by an AI system. The changes can concern recognition, diagnosis, planning, robot control, and prediction, among others, and can be improvements to existing systems or a “synthesis ab initio of new systems”.[22]

According to Andrew Ng’s definition[23], “Machine learning is the field of study that gives computers the ability to learn without being explicitly programmed”.

Learning systems mimic the human brain: they learn from data through experience with certain tasks whose performance is measured. The measured performance changes the system in order to find the best model for carrying out a given task accurately. The tasks can be varied, such as classification, synthesis and sampling, or probability estimation.

Some machine learning approaches use symbolic or classical AI, while others employ neural networks.
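Samuel’s idea of automatically learning the parameters of an evaluation function can be shown with a much simpler toy example: fitting the parameters of a line from data, with the measured error driving the changes to the system. This illustrates the general idea only, not Samuel’s checkers method; all names and values below are hypothetical.

```python
def train_linear(data, lr=0.05, epochs=200):
    """Fit y = a*x + b by gradient descent on squared error:
    the measured performance (the error) drives the parameter changes."""
    a, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (a * x + b) - y
            a -= lr * err * x  # adjust each parameter against its
            b -= lr * err      # contribution to the error
    return a, b

# Data generated from y = 2x + 1; the learner recovers the parameters
# without them ever being explicitly programmed in.
data = [(x, 2 * x + 1) for x in range(5)]
a, b = train_linear(data)
print(round(a, 2), round(b, 2))  # close to 2.0 and 1.0
```

The loop embodies the definitions above: performance is measured on each example, and the result of that measurement changes the system.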

Deep Learning.

Deep learning is an area of machine learning that emerged from the intersection of neural networks, artificial intelligence, graphical modeling, optimization, pattern recognition and signal processing.[24]

According to the Massachusetts Institute of Technology, Deep learning can be defined as follows:[25]

“Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep”.

Deep learning technology is behind the current explosion of AI and has given computers extraordinary powers, such as the ability to recognize spoken words almost as well as a person does, an ability far too complex to hand-code into a machine. Deep learning has transformed computer vision and dramatically improved machine translation. It is now used to guide key decisions in medicine, finance, manufacturing, and beyond.

The essential importance of deep learning lies in the fact that its system of knowledge representation is not at a single level but at multiple levels, which leads to data processing at levels of abstraction beyond human comprehension.[26]
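The “hierarchy of concepts” can be seen in miniature in a two-layer network computing XOR, a function no single-layer perceptron can represent: a hidden layer builds intermediate concepts (roughly OR and AND) and the output layer combines them. The weights below are hand-picked for illustration (hypothetical values, not learned), and the names are my own.

```python
import math

def layer(inputs, weights, biases):
    """One network layer: each unit takes a weighted sum of the
    previous layer's outputs through a sigmoid nonlinearity."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# 2-2-1 network: hidden units approximate OR and AND of the inputs,
# and the output unit computes roughly (OR and not AND), i.e. XOR.
hidden_w, hidden_b = [[10, 10], [10, 10]], [-5, -15]
out_w, out_b = [[10, -10]], [-5]

def xor_net(x1, x2):
    hidden = layer([x1, x2], hidden_w, hidden_b)   # level 1: simple concepts
    return round(layer(hidden, out_w, out_b)[0])   # level 2: their combination

print([xor_net(p, q) for p, q in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

Real deep networks stack many such layers and learn the weights from data via backpropagation, but the principle of building complicated concepts out of simpler ones is the same.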

Robotics.

Robotics can be defined as “the branch of technology that deals with the design, construction, operation, and application of robots”.[27]

Robots can be defined as “a machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer”.[28]

Robots do not necessarily contain artificial intelligence. But there are some that are endowed with it, particularly with deep learning, giving rise to surprising and revolutionary technologies.[29]

[1] Luigia Carlucci Aiello, “The Multifaceted Impact Of Ada Lovelace In The Digital Age”, Artificial Intelligence 235 (2016): 58–62, doi:10.1016/j.artint.2016.02.003.

[2] Margaret A. Boden, AI: Its Nature And Future, 1st ed. Oxford: Oxford University Press, 2016. 1–28

[3] Suw Charman-Anderson, “Ada Lovelace: Victorian Computing Visionary.” Ada User Journal 36.1 (2015).

[4] J.G. Llaurado, “ADA, The Enchantress Of Numbers”, International Journal Of Bio-Medical Computing 32, no. 1 (1993): 79–80, doi:10.1016/0020-7101(93)90008-t.

[5] L. F. Menabrea & A. Lovelace (1842). “Sketch of the analytical engine invented by Charles Babbage”. 49–59

[6] Ronald L. Graham, et al. “Concrete mathematics: a foundation for computer science.” Computers in Physics 3.5 (1989): 106–107.

[7] Margaret A. Boden, AI: Its Nature And Future, 1st ed. Oxford: Oxford University Press, 2016. 1–28

[8] A. M. Turing, “On Computable Numbers, With An Application To The Entscheidungsproblem”, Proceedings Of The London Mathematical Society s2-42, no. 1 (1937): 230–265, doi:10.1112/plms/s2-42.1.230.

[9] The Entscheidungsproblem (mathematics, logic) is a decision problem, of finding a way to decide whether a formula is true or provable within a given system. “Entscheidungsproblem Dictionary Definition | Entscheidungsproblem Defined”, Yourdictionary.Com, accessed 14 March 2019, https://www.yourdictionary.com/entscheidungsproblem.

[10] Nils Nilsson. The quest for artificial intelligence. Cambridge University Press, 2009. E-book. 57

[11] A. M. Turing, “I. — Computing Machinery And Intelligence”, Mind no. 236 (1950): 433–460, doi:10.1093/mind/lix.236.433.

[12] John McCarthy, “What Is Artificial Intelligence?”, Stanford University, 1998, http://www-formal.stanford.edu/jmc/whatisai/whatisai.html.

[13] W. Ross Ashby and J. R. Pierce, “An Introduction To Cybernetics”, Physics Today 10, no. 7 (1957): 34–36, doi:10.1063/1.3060436.

[14] W. Ross Ashby. “Principles Of The Self-Organizing Dynamic System”, The Journal Of General Psychology 37, no. 2 (1947): 125–128, doi:10.1080/00221309.1947.9918144.

[15] Warren S. McCulloch and Walter Pitts. “A Logical Calculus Of The Ideas Immanent In Nervous Activity”, Bulletin Of Mathematical Biology 52, no. 1–2 (1990): 99–115, doi:10.1016/s0092-8240(05)80006-0.

[16] Nils J. Nilsson. The quest for artificial intelligence. Cambridge University Press, 2009. E-book. 34- 35

[17] Ibid.

[18] S.I. Gallant, “Perceptron-Based Learning Algorithms”, IEEE Transactions on Neural Networks 1, no. 2 (1990): 179–191, doi:10.1109/72.80230.

[19] James Moor. “The Dartmouth College artificial intelligence conference: The next fifty years.” Ai Magazine 27.4 (2006).87.

[20] John McCarthy and Ed Feigenbaum, “Arthur L. Samuel: Pioneer In Machine Learning”, ICGA Journal 14, no. 1 (1991): 19–20, doi:10.3233/icg-1991-14105.

[21] Ibid.

[22] Nils J Nilsson. Introduction to machine learning: An early draft of a proposed textbook. (1996). E-book. 1

[23] Ibid.

[24] Li Deng, “Deep Learning: Methods and Applications”, Foundations And Trends® In Signal Processing 7, no. 3–4 (2014): 197–202, doi:10.1561/2000000039.

[25] Ian Goodfellow, Yoshua Bengio and Aaron Courville, Deep Learning (MIT Press, 2016). Summary https://mitpress.mit.edu/books/deep-learning.

[26] Ibid.

[27] “Robotics | Definition Of Robotics In English By Oxford Dictionaries”, Oxford Dictionaries | English, last modified 2016, accessed March 1, 2019, https://en.oxforddictionaries.com/definition/robotics.

[28] “Robot | Definition Of Robot In English By Oxford Dictionaries”, Oxford Dictionaries | English, last modified 2017, accessed March 1, 2019, https://en.oxforddictionaries.com/definition/robot.

[29] According to the Robotic Industries Association, there are four main areas where artificial intelligence, and in particular machine learning, impact the development of robotics applications. The first is vision and object detection; the second is more accurate grasping; the third is motion control, where machine learning helps robots interact dynamically and avoid obstacles to maintain productivity; and the fourth is data, where machine learning helps robots understand physical and logistical data patterns, making applications more efficient. See Robotic Industries Association, “Applying Artificial Intelligence And Machine Learning In Robotics”, Robotics Online, last modified 2018, accessed March 1, 2019, https://www.robotics.org/blog-article.cfm/Applying-Artificial-Intelligence-and-Machine-Learning-in-Robotics/103.
