
Sunday, April 23, 2023

Artificial intelligence part - 1

ARTIFICIAL INTELLIGENCE (AI)

Humans have experimented since their earliest days, and among all those experiments, the attempt to create intelligence stands out as a remarkable event in human history. The advanced artificial intelligence available today is the result of research carried out over a long period. In this series, I hope to bring you many important points related to artificial intelligence. Since it is a complex subject, however, it has to be covered in a number of parts; this is the first post.

                    

The rise of artificial intelligence

                    

As the first post in this series on artificial intelligence, this article covers the important developments in the early history of the field.

The early years of AI were characterised by tremendous enthusiasm, great ideas and very limited success. Only a few years before, computers had been introduced to perform routine mathematical calculations, but now AI researchers were demonstrating that computers could do more than that. It was an era of great expectations.
John McCarthy, one of the organisers of the Dartmouth workshop and the inventor of the term ‘artificial intelligence’, moved from Dartmouth to MIT. He defined the high-level language LISP, one of the oldest programming languages (FORTRAN is just two years older), which is still in use today. In 1958, McCarthy presented a paper, ‘Programs with Common Sense’, in which he proposed a program called the Advice Taker to search for solutions to general problems of the world (McCarthy, 1958). McCarthy demonstrated how his program could generate, for example, a plan to drive to the airport, based on some simple axioms. Most importantly, the program was designed to accept new axioms, or in other words new knowledge, in different areas of expertise without being reprogrammed. Thus the Advice Taker was the first complete knowledge-based system, incorporating the central principles of knowledge representation and reasoning.
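To give a feel for that idea, here is a minimal Python sketch of a knowledge-based system in the Advice Taker spirit: knowledge lives in data (facts and rule-like axioms), so new axioms can be added without touching the program. The rule format and the airport axioms below are my own illustrative inventions, not McCarthy's original logical notation.

```python
# Facts and rule-like axioms are plain data, so new knowledge can be added
# without reprogramming. The syntax and the airport axioms are invented here
# purely for illustration.

facts = {"at(home)", "have(car)"}
rules = [
    # (premises, conclusion): if every premise holds, the conclusion holds.
    ({"at(home)", "have(car)"}, "can(drive_to_airport)"),
    ({"can(drive_to_airport)"}, "at(airport)"),
]

def forward_chain(facts, rules):
    """Naive forward chaining: apply rules until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))

# Adding new knowledge is just adding more data:
rules.append(({"at(airport)"}, "can(board_plane)"))
print(forward_chain(facts, rules))
```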

Another organiser of the Dartmouth workshop, Marvin Minsky, also moved to MIT. However, unlike McCarthy with his focus on formal logic, Minsky developed an anti-logical outlook on knowledge representation and reasoning. His theory of frames (Minsky, 1975) was a major contribution to knowledge engineering. The early work on neural computing and artificial neural networks started by McCulloch and Pitts was continued.
Learning methods were improved, and Frank Rosenblatt proved the perceptron convergence theorem, demonstrating that his learning algorithm could adjust the connection strengths of a perceptron (Rosenblatt, 1962).
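To make the perceptron learning rule concrete, here is a minimal Python sketch that trains a single perceptron on the logical AND function, a linearly separable toy problem. The learning rate, epoch count and variable names are illustrative choices of mine, not Rosenblatt's original formulation.

```python
# A minimal perceptron trained on logical AND (a linearly separable problem).
# Learning rate and epoch count are illustrative choices.

inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]                     # AND truth table

w = [0.0, 0.0]   # connection strengths (weights)
b = 0.0          # bias (threshold term)
lr = 0.1         # learning rate

for epoch in range(20):
    for (x1, x2), t in zip(inputs, targets):
        y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0   # step activation
        error = t - y
        # The perceptron rule: nudge each weight in proportion to the error
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b    += lr * error

print("weights:", w, "bias:", b)   # a separating line for AND
```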

One of the most ambitious projects of this era of great expectations was the General Problem Solver (GPS) (Newell and Simon, 1961, 1972). Allen Newell and Herbert Simon of Carnegie Mellon University developed a general-purpose program to simulate human problem-solving methods. GPS was probably the first attempt to separate the problem-solving technique from the data. It was based on the technique now referred to as means-ends analysis.

Newell and Simon postulated that a problem to be solved could be defined in terms of states. Means-ends analysis was used to determine a difference between the current state and the desirable state (the goal state) of the problem, and to choose and apply operators to reach the goal state. If the goal state could not be reached immediately from the current state, a new state closer to the goal would be established and the procedure repeated until the goal state was reached. The sequence of operators applied constituted the solution plan.
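As a rough illustration of means-ends analysis (and only that; this is not the GPS program itself), the toy Python sketch below greedily applies whichever operator most reduces the numeric difference between the current state and the goal. The states, operators and greedy strategy are all my own simplifications.

```python
# Toy illustration of means-ends analysis over numeric states: at each step,
# apply the operator that most reduces the difference to the goal.

def means_ends(start, goal, operators):
    state, plan = start, []
    while state != goal:
        # Pick the operator whose result is closest to the goal.
        best = min(operators, key=lambda op: abs(op(state) - goal))
        if abs(best(state) - goal) >= abs(state - goal):
            return None                    # no operator reduces the difference
        state = best(state)
        plan.append(best.__name__)
    return plan

def add3(n):   return n + 3
def double(n): return n * 2

# Prints a plan of operator names leading from state 2 to state 16,
# e.g. ['add3', 'double', 'add3', 'add3'].
print(means_ends(2, 16, [add3, double]))
```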

However, GPS failed to solve complicated problems. The program was based on formal logic and could therefore generate an infinite number of possible operators, which is inherently inefficient. The amount of computer time and memory that GPS required to solve real-world problems led to the project being abandoned.

In summary, we can say that in the 1960s, AI researchers attempted to simulate the complex thinking process by inventing general methods for solving broad classes of problems. They used general-purpose search mechanisms to find a solution to the problem. Such approaches, now referred to as weak methods, applied weak information about the problem domain; this resulted in weak performance of the programs developed.

However, it was also a time when the field of AI attracted great scientists who introduced fundamentally new ideas in such areas as knowledge representation, learning algorithms, neural computing and computing with words. These ideas could not be implemented then because of the limited capabilities of computers, but two decades later they led to the development of real-life practical applications. It is interesting to note that Lotfi Zadeh, a professor from the University of California at Berkeley, published his famous paper ‘Fuzzy sets’ in that same decade (Zadeh, 1965). This paper is now considered the foundation of fuzzy set theory, and in the decades since, fuzzy researchers have built hundreds of smart machines and intelligent systems (a small sketch of the fuzzy-set idea follows at the end of this section).

By 1970, however, the euphoria about AI was gone, and most government funding for AI projects was cancelled. AI was still a relatively new field, academic in nature, with few practical applications apart from playing games (Samuel, 1959, 1967; Greenblatt et al., 1967). So, to the outsider, its achievements looked like toys, as no AI system at that time could manage real-world problems.
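Before moving on to the summary, here is the promised sketch of Zadeh's fuzzy-set idea: membership in a set is a matter of degree between 0 and 1 rather than a strict yes or no. The particular thresholds and membership function below are illustrative choices of mine, not taken from Zadeh's paper.

```python
# Membership in the fuzzy set 'tall' as a degree between 0 and 1.
# The thresholds and linear ramp are illustrative choices.

def tall(height_cm):
    """Degree to which a given height belongs to the fuzzy set 'tall'."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30          # linear ramp between 160 and 190 cm

for h in (155, 170, 185, 195):
    print(h, "cm ->", round(tall(h), 2))   # 0.0, 0.33, 0.83, 1.0
```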



SUMMARY





The early years of AI were marked by attempts to simulate complex thinking processes by inventing general methods for solving broad classes of problems. These approaches, now referred to as weak methods, applied weak information about the problem domain, resulting in weak performance of the programs developed. However, it was also a time when the field of AI attracted great scientists who introduced fundamental new ideas in such areas as knowledge representation, learning algorithms, neural computing, and computing with words. These ideas could not be implemented then because of the limited capabilities of computers, but two decades later they led to the development of real-life practical applications.

By 1970, the euphoria about AI was gone, and most government funding for AI projects had been cancelled. AI was still a relatively new field, academic in nature, with few practical applications apart from playing games. To the outsider, its achievements looked like toys, as no AI system at that time could manage real-world problems.


This is the end of AI PART - 01. See you in PART 2.


Thank you.

Have a nice day.

