
Symbolic Artificial Intelligence

In artificial intelligence, symbolic artificial intelligence (also known as classical AI or logic-based AI) [1] [2] is the term for the collection of all methods in AI research that are based on high-level symbolic (human-readable) representations of problems, logic and search. [3] Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems (in particular, expert systems), symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.

Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the mid-1990s. [4] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the ultimate goal of their field. [citation needed] An early boom, with early successes such as the Logic Theorist and Samuel's Checkers Playing Program, led to unrealistic expectations and promises and was followed by the first AI Winter as funding dried up. [5] [6] A second boom (1969-1986) occurred with the rise of expert systems, their promise of capturing corporate expertise, and an enthusiastic corporate embrace. [7] [8] That boom, and some early successes, e.g., with XCON at DEC, was followed again by later disappointment. [8] Problems arose with difficulties in knowledge acquisition, maintaining large knowledge bases, and brittleness in handling out-of-domain problems. Another, second, AI Winter (1988-2011) followed. [9] Subsequently, AI researchers focused on addressing underlying problems in handling uncertainty and in knowledge acquisition. [10] Uncertainty was addressed with formal methods such as hidden Markov models, Bayesian reasoning, and statistical relational learning. [11] [12] Symbolic machine learning addressed the knowledge acquisition problem with contributions including Version Space, Valiant's PAC learning, Quinlan's ID3 decision-tree learning, case-based learning, and inductive logic programming to learn relations. [13]

Neural networks, a subsymbolic approach, had been pursued from early days and reemerged strongly in 2012. Early examples are Rosenblatt's perceptron learning work, the backpropagation work of Rumelhart, Hinton and Williams, [14] and work in convolutional neural networks by LeCun et al. in 1989. [15] However, neural networks were not viewed as successful until about 2012: "Until Big Data became commonplace, the general consensus in the AI community was that the so-called neural-network approach was hopeless. Systems just didn't work that well, compared to other methods. ... A revolution came in 2012, when a number of people, including a team of researchers working with Hinton, worked out a way to use the power of GPUs to enormously increase the power of neural networks." [16] Over the next several years, deep learning had spectacular success in handling vision, speech recognition, speech synthesis, image generation, and machine translation. However, since 2020, as inherent problems with bias, explanation, comprehensibility, and robustness became more apparent with deep learning approaches, an increasing number of AI researchers have called for combining the best of both the symbolic and neural network approaches [17] [18] and addressing areas that both approaches have difficulty with, such as common-sense reasoning. [16]

A short history of symbolic AI to the present day follows below. Period and titles are drawn from Henry Kautz's 2020 AAAI Robert S. Engelmore Memorial Lecture [19] and the longer Wikipedia article on the History of AI, with dates and titles differing slightly for increased clarity.

The first AI summer: irrational exuberance, 1948-1966

Success at early attempts in AI occurred in three main areas: artificial neural networks, knowledge representation, and heuristic search, contributing to high expectations. This section summarizes Kautz's reprise of early AI history.

Approaches inspired by human or animal cognition or behavior

Cybernetic approaches attempted to replicate the feedback loops between animals and their environments. A robotic turtle, with sensors, motors for driving and steering, and seven vacuum tubes for control, based on a preprogrammed neural net, was built as early as 1948. This work can be seen as an early precursor to later work in neural networks, reinforcement learning, and situated robotics. [20]

An important early symbolic AI program was the Logic Theorist, written by Allen Newell, Herbert Simon and Cliff Shaw in 1955-56, as it was able to prove 38 elementary theorems from Whitehead and Russell's Principia Mathematica. Newell, Simon, and Shaw later generalized this work to create a domain-independent problem solver, GPS (General Problem Solver). GPS solved problems represented with formal operators via state-space search using means-ends analysis. [21]

During the 1960s, symbolic approaches achieved great success at simulating intelligent behavior in structured environments such as game-playing, symbolic mathematics, and theorem-proving. AI research was concentrated in four institutions in the 1960s: Carnegie Mellon University, Stanford, MIT and (later) University of Edinburgh. Each one developed its own style of research. Earlier approaches based on cybernetics or artificial neural networks were abandoned or pushed into the background.

Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. [22] [23] This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the mid-1980s. [24] [25]

Heuristic search

In addition to the highly specialized domain-specific knowledge that we will see later used in expert systems, early symbolic AI researchers discovered another, more general, application of knowledge. These were called heuristics, rules of thumb that guide a search in promising directions: "How can non-enumerative search be practical when the underlying problem is exponentially hard? The approach advocated by Simon and Newell is to employ heuristics: fast algorithms that may fail on some inputs or output suboptimal solutions." [26] Another important advance was to find a way to apply these heuristics that guarantees a solution will be found, if there is one, notwithstanding the occasional fallibility of heuristics: "The A* algorithm provided a general frame for complete and optimal heuristically guided search. A* is used as a subroutine within practically every AI algorithm today but is still no magic bullet; its guarantee of completeness is bought at the cost of worst-case exponential time." [26]
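The A* scheme described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular historical implementation; the grid world and Manhattan-distance heuristic are invented for the example. The key property is that the heuristic never overestimates the true remaining cost, which is what buys the completeness and optimality guarantee.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """A* search: returns a lowest-cost path from start to goal, or None.

    neighbors(node) yields (next_node, step_cost) pairs; heuristic(node)
    must never overestimate the true remaining cost (admissibility).
    """
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
    return None

# Toy example: unit-cost moves on a 5x5 grid from (0, 0) to (4, 4),
# with Manhattan distance as an admissible heuristic.
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

path = a_star((0, 0), (4, 4), grid_neighbors,
              lambda p: abs(4 - p[0]) + abs(4 - p[1]))
```

The priority queue is ordered by f = g + h, so nodes that look cheapest overall are expanded first; an inadmissible heuristic would speed things up further but forfeit the optimality guarantee, which is exactly the trade-off the quotation describes.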

Early work on knowledge representation and reasoning

Early work covered both applications of formal reasoning emphasizing first-order logic, along with attempts to handle common-sense reasoning in a less formal manner.

Modeling formal reasoning with logic: the "neats"

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate the exact mechanisms of human thought, but could instead search for the essence of abstract reasoning and problem-solving with logic, [27] regardless of whether people used the same algorithms. [a] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning. [31] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe, which led to the development of the programming language Prolog and the science of logic programming. [32] [33]

Modeling implicit common-sense knowledge with frames and scripts: the "scruffies"

Researchers at MIT (such as Marvin Minsky and Seymour Papert) [34] [35] [6] found that solving difficult problems in vision and natural language processing required ad hoc solutions - they argued that no simple and general principle (like logic) would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford). [36] [37] Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time. [38] [39] [40]

The first AI winter: crushed dreams, 1967-1977

The first AI winter was a shock:

During the first AI summer, many people believed that machine intelligence could be achieved in just a few years. The Defense Advance Research Projects Agency (DARPA) launched programs to support AI research to use AI to solve problems of national security; in particular, to automate the translation of Russian to English for intelligence operations and to create autonomous tanks for the battlefield. Researchers had begun to realize that achieving AI was going to be much harder than was supposed a decade earlier, but a combination of hubris and disingenuousness led many university and think-tank researchers to accept funding with promises of deliverables that they should have known they could not meet. By the mid-1960s neither useful natural language translation systems nor autonomous tanks had been created, and a dramatic backlash set in. New DARPA leadership canceled existing AI funding programs.

Outside of the United States, the most fertile ground for AI research was the United Kingdom. The AI winter in the United Kingdom was spurred on not so much by disappointed military leaders as by rival academics who viewed AI researchers as charlatans and a drain on research funding. A professor of applied mathematics, Sir James Lighthill, was commissioned by Parliament to evaluate the state of AI research in the nation. The report stated that all of the problems being worked on in AI would be better handled by researchers from other disciplines - such as applied mathematics. The report also claimed that AI successes on toy problems could never scale to real-world applications due to combinatorial explosion. [41]

The second AI summer: knowledge is power, 1978-1987

Knowledge-based systems

As limitations with weak, domain-independent methods became more and more apparent, [42] researchers from all three traditions began to build knowledge into AI applications. [43] [7] The knowledge revolution was driven by the realization that knowledge underlies high-performance, domain-specific AI applications.

Edward Feigenbaum stated:

– “In the knowledge lies the power.” [44]
to describe that high performance in a specific domain requires both general and highly domain-specific knowledge. Ed Feigenbaum and Doug Lenat called this The Knowledge Principle:

(1) The Knowledge Principle: if a program is to perform a complex task well, it must know a great deal about the world in which it operates.
(2) A plausible extension of that principle, called the Breadth Hypothesis: there are two additional abilities necessary for intelligent behavior in unexpected situations: falling back on increasingly general knowledge, and analogizing to specific but far-flung knowledge. [45]

Success with expert systems

This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first commercially successful form of AI software. [46] [47] [48]

Key expert systems were:

– DENDRAL, which found the structure of organic molecules from their chemical formula and mass spectrometer readings.
– MYCIN, which diagnosed bacteremia - and suggested further lab tests, when necessary - by interpreting lab results, patient history, and doctor observations. "With about 450 rules, MYCIN was able to perform as well as some experts, and considerably better than junior doctors." [49]
– INTERNIST and CADUCEUS, which tackled internal medicine diagnosis. INTERNIST attempted to capture the expertise of the chairman of internal medicine at the University of Pittsburgh School of Medicine, while CADUCEUS could eventually diagnose up to 1000 different diseases.
– GUIDON, which showed how a knowledge base built for expert problem solving could be repurposed for teaching. [50]
– XCON, to configure VAX computers, a then laborious process that could take up to 90 days. XCON reduced the time to about 90 minutes. [9]
DENDRAL is considered the first expert system that relied on knowledge-intensive problem-solving. It is described below, by Ed Feigenbaum, from a Communications of the ACM interview, Interview with Ed Feigenbaum:

One of the people at Stanford interested in computer-based models of mind was Joshua Lederberg, the 1958 Nobel Prize winner in genetics. When I told him I wanted an induction "sandbox", he said, "I have just the one for you." His lab was doing mass spectrometry of amino acids. The question was: how do you go from looking at the spectrum of an amino acid to the chemical structure of the amino acid? That's how we started the DENDRAL Project: I was good at heuristic search methods, and he had an algorithm that was good at generating the chemical problem space.

We did not have a grandiose vision. We worked bottom up. Our chemist was Carl Djerassi, inventor of the chemical behind the birth control pill, and also one of the world's most respected mass spectrometrists. Carl and his postdocs were world-class experts in mass spectrometry. We began to add to their knowledge, inventing knowledge of engineering as we went along. These experiments amounted to titrating into DENDRAL more and more knowledge. The more you did that, the smarter the program became. We had very good results.

The generalization was: in the knowledge lies the power. That was the big idea. In my career that is the huge, "Ah ha!," and it wasn't the way AI was being done previously. Sounds simple, but it's probably AI's most powerful generalization. [51]

The other expert systems mentioned above came after DENDRAL. MYCIN exemplifies the classic expert system architecture of a knowledge base of rules coupled to a symbolic reasoning mechanism, including the use of certainty factors to handle uncertainty. GUIDON shows how an explicit knowledge base can be repurposed for a second application, tutoring, and is an example of an intelligent tutoring system, a particular kind of knowledge-based application. Clancey showed that it was not sufficient simply to use MYCIN's rules for instruction, but that he also needed to add rules for dialogue management and student modeling. [50] XCON is significant because of the millions of dollars it saved DEC, which triggered the expert system boom where most all major corporations in the US had expert systems groups, to capture corporate expertise, preserve it, and automate it:

By 1988, DEC's AI group had 40 expert systems deployed, with more on the way. DuPont had 100 in use and 500 in development. Nearly every major U.S. corporation had its own AI group and was either using or investigating expert systems. [49]

Chess expert knowledge was encoded in Deep Blue. In 1996, this allowed IBM's Deep Blue, with the help of symbolic AI, to win in a game of chess against the world champion at that time, Garry Kasparov. [52]

Architecture of knowledge-based and expert systems

A key component of the system architecture for all expert systems is the knowledge base, which stores facts and rules for problem-solving. [53] The simplest approach for an expert system knowledge base is simply a collection or network of production rules. Production rules connect symbols in a relationship similar to an If-Then statement. The expert system processes the rules to make deductions and to determine what additional information it needs, i.e. what questions to ask, using human-readable symbols. For example, OPS5, CLIPS and their successors Jess and Drools operate in this fashion.

Expert systems can operate in either a forward chaining - from evidence to conclusions - or backward chaining - from goals to needed data and prerequisites - manner. More advanced knowledge-based systems, such as Soar, can also perform meta-level reasoning, that is, reasoning about their own reasoning in terms of deciding how to solve problems and monitoring the success of problem-solving strategies.
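Forward chaining can be illustrated with a minimal Python sketch. This is not how OPS5 or CLIPS are actually implemented (real engines use the Rete algorithm for efficient matching); it only shows the control idea: a rule fires whenever all of its premises are present in working memory, adding its conclusion, until no rule can add anything new. The facts and rules here are invented for the example.

```python
# Minimal forward-chaining sketch: each rule is (premises, conclusion).
# Facts and rules are invented for illustration only.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly", "swims"}, "is_penguin"),
]

def forward_chain(facts, rules):
    """Fire rules until working memory reaches a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires: add its conclusion
                changed = True
    return facts

derived = forward_chain({"has_feathers", "lays_eggs", "cannot_fly", "swims"}, rules)
```

Backward chaining would run the same rules in the other direction: start from a goal such as "is_penguin" and recursively ask which premises still need to be established, which is what determines the questions the system asks the user.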

Blackboard systems are a second kind of knowledge-based or expert system architecture. They model a community of experts incrementally contributing, where they can, to solve a problem. The problem is represented at multiple levels of abstraction or in alternate views. The experts (knowledge sources) volunteer their services whenever they recognize they can make a contribution. Potential problem-solving actions are represented on an agenda that is updated as the problem situation changes. A controller decides how useful each contribution is, and who should make the next problem-solving action. One example, the BB1 blackboard architecture, [54] was originally inspired by studies of how humans plan to perform multiple tasks in a trip. [55] An innovation of BB1 was to apply the same blackboard model to solving its control problem, i.e., its controller performed meta-level reasoning with knowledge sources that monitored how well a plan or the problem-solving was proceeding and could switch from one strategy to another as conditions - such as goals or times - changed. BB1 has been applied in multiple domains: construction site planning, intelligent tutoring systems, and real-time patient monitoring.

The second AI winter, 1988-1993

At the height of the AI boom, companies such as Symbolics, LMI, and Texas Instruments were selling LISP machines specifically targeted to accelerate the development of AI applications and research. In addition, several artificial intelligence companies, such as Teknowledge and Inference Corporation, were selling expert system shells, training, and consulting to corporations.

Unfortunately, the AI boom did not last, and Kautz best describes the second AI winter that followed:

Many reasons can be offered for the arrival of the second AI winter. The hardware companies failed when much more cost-effective general Unix workstations from Sun, together with good compilers for LISP and Prolog, came onto the market. Many commercial deployments of expert systems were discontinued when they proved too costly to maintain. Medical expert systems never caught on for several reasons: the difficulty in keeping them up to date; the challenge for medical professionals to learn how to use a bewildering variety of different expert systems for different medical conditions; and perhaps most crucially, the reluctance of doctors to trust a computer-made diagnosis over their gut instinct, even for specific domains where the expert systems could outperform an average doctor. Venture capital money deserted AI practically overnight. The world AI conference IJCAI hosted an enormous and lavish trade show and thousands of nonacademic attendees in 1987 in Vancouver; the main AI conference the following year, AAAI 1988 in St. Paul, was a small and strictly academic affair. [9]

Adding in more rigorous foundations, 1993-2011

Uncertain reasoning

Both statistical approaches and extensions to logic were tried.

One statistical approach, hidden Markov models, had already been popularized in the 1980s for speech recognition work. [11] Subsequently, in 1988, Judea Pearl popularized the use of Bayesian Networks as a sound but efficient way of handling uncertain reasoning with his publication of the book Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, [56] and Bayesian approaches were applied successfully in expert systems. [57] Even later, in the 1990s, statistical relational learning, an approach that combines probability with logical formulas, allowed probability to be combined with first-order logic, e.g., with either Markov Logic Networks or Probabilistic Soft Logic.
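The core of Bayesian reasoning can be shown with a toy two-node network. The variables and all the probabilities below are invented for illustration (they are not taken from Pearl's book); the point is only that a posterior over a hidden cause follows mechanically from the prior and the conditional probabilities via Bayes' rule.

```python
# Toy two-node Bayesian network: Disease -> Test.
# All numbers are invented for illustration.
p_disease = 0.01                              # prior P(disease)
p_positive_given = {True: 0.95, False: 0.05}  # P(test positive | disease?)

# Posterior P(disease | positive test) by enumeration over the hidden cause.
joint_true = p_disease * p_positive_given[True]
joint_false = (1 - p_disease) * p_positive_given[False]
posterior = joint_true / (joint_true + joint_false)
```

Even with a 95%-sensitive test, the low prior keeps the posterior around 16%, which is the kind of calibrated uncertainty handling that rule-based certainty factors in systems like MYCIN approximated only heuristically.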

Other, non-probabilistic extensions to first-order logic were also tried. For example, non-monotonic reasoning could be used with truth maintenance systems. A truth maintenance system tracked assumptions and justifications for all inferences. It allowed inferences to be withdrawn when assumptions were found to be incorrect or a contradiction was derived. Explanations could be provided for an inference by explaining which rules were applied to create it and then continuing through underlying inferences and rules all the way back to root assumptions. [58] Lotfi Zadeh had introduced a different kind of extension to handle the representation of vagueness. For example, in deciding how "heavy" or "tall" a man is, there is frequently no clear "yes" or "no" answer, and a predicate for heavy or tall would instead return values between 0 and 1. Those values represented to what degree the predicates were true. His fuzzy logic further provided a means for propagating combinations of these values through logical formulas. [59]
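Zadeh's idea of graded predicates can be sketched directly. The membership functions and thresholds below are illustrative choices (not from Zadeh's papers); the connectives, however, follow his standard convention of min for AND.

```python
# Fuzzy-logic sketch: predicates return degrees of truth in [0, 1].
# The thresholds are invented for illustration.
def tall(height_cm):
    # Fully false below 160 cm, fully true above 190 cm, linear in between.
    return min(1.0, max(0.0, (height_cm - 160) / 30))

def heavy(weight_kg):
    # Fully false below 60 kg, fully true above 100 kg.
    return min(1.0, max(0.0, (weight_kg - 60) / 40))

def fuzzy_and(a, b):
    # Zadeh's min-based conjunction.
    return min(a, b)

# Degree to which a 178 cm, 92 kg person is "tall AND heavy".
degree = fuzzy_and(tall(178), heavy(92))
```

The conjunction propagates the weaker of the two degrees of truth, so "tall AND heavy" here is 0.6: limited by how tall, not how heavy, the person is.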

Machine learning

Symbolic machine learning approaches were investigated to address the knowledge acquisition bottleneck. One of the earliest is Meta-DENDRAL. Meta-DENDRAL used a generate-and-test technique to generate plausible rule hypotheses to test against spectra. Domain and task knowledge reduced the number of candidates tested to a manageable size. Feigenbaum described Meta-DENDRAL as

…the culmination of my dream of the early to mid-1960s having to do with theory formation. The conception was that you had a problem solver like DENDRAL that took some inputs and produced an output. In doing so, it used layers of knowledge to steer and prune the search. That knowledge got in there because we interviewed people. But how did the people get the knowledge? By looking at thousands of spectra. So we wanted a program that would look at thousands of spectra and infer the knowledge of mass spectrometry that DENDRAL could use to solve individual hypothesis formation problems. We did it. We were even able to publish new knowledge of mass spectrometry in the Journal of the American Chemical Society, giving credit only in a footnote that a program, Meta-DENDRAL, actually did it. We were able to do something that had been a dream: to have a computer program come up with a new and publishable piece of science. [51]

In contrast to the knowledge-intensive approach of Meta-DENDRAL, Ross Quinlan invented a domain-independent approach to statistical classification, decision tree learning, starting first with ID3 [60] and then later extending its capabilities to C4.5. [61] The decision trees created are glass box, interpretable classifiers, with human-interpretable classification rules.
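ID3's central computation is the information gain of a candidate split: the reduction in entropy of the class labels achieved by partitioning on an attribute. A minimal sketch of that criterion (not Quinlan's full tree-building code, and using an invented toy dataset) follows:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, labels, attribute):
    """ID3's splitting criterion: entropy reduction from splitting on attribute."""
    n = len(labels)
    remainder = 0.0
    for v in {ex[attribute] for ex in examples}:
        subset = [lab for ex, lab in zip(examples, labels) if ex[attribute] == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# Toy data (invented): "outlook" perfectly predicts the label here,
# so splitting on it removes all uncertainty (gain = 1 bit).
examples = [{"outlook": "sunny"}, {"outlook": "sunny"},
            {"outlook": "rain"}, {"outlook": "rain"}]
labels = ["no", "no", "yes", "yes"]
gain = information_gain(examples, labels, "outlook")
```

ID3 greedily picks the attribute with the highest gain at each node and recurses on the resulting subsets, which is why the resulting trees read directly as human-interpretable classification rules.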

Advances were made in understanding machine learning theory, too. Tom Mitchell introduced version space learning, which describes learning as a search through a space of hypotheses, with upper, more general, and lower, more specific, boundaries encompassing all viable hypotheses consistent with the examples seen so far. [62] More formally, Valiant introduced Probably Approximately Correct learning (PAC learning), a framework for the mathematical analysis of machine learning. [63]
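The lower (specific) boundary of a version space can be sketched on its own: it is the most specific conjunctive hypothesis still covering all positive examples seen so far, and each positive example generalizes it minimally (this is the Find-S half of Mitchell's candidate-elimination algorithm; the full algorithm also maintains the general boundary). The attributes and examples below are invented for illustration.

```python
# Specific boundary of a version space over conjunctive hypotheses.
# "?" means "any value is acceptable for this attribute".
def generalize(hypothesis, example):
    """Minimally generalize a hypothesis so it covers a positive example."""
    if hypothesis is None:  # most specific start: covers nothing yet
        return dict(example)
    return {attr: (val if example[attr] == val else "?")
            for attr, val in hypothesis.items()}

# Two invented positive examples of the concept being learned.
positives = [
    {"sky": "sunny", "temp": "warm", "wind": "strong"},
    {"sky": "sunny", "temp": "cold", "wind": "strong"},
]

h = None
for ex in positives:
    h = generalize(h, ex)
# h is now the most specific conjunction consistent with both examples.
```

After both examples, the hypothesis has relaxed only the attribute on which the examples disagree; negative examples, handled by the general boundary in the full algorithm, would prune from the other direction.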

Symbolic machine learning encompassed more than learning by example. E.g., John Anderson provided a cognitive model of human learning where skill practice results in a compilation of rules from a declarative format to a procedural format with his ACT-R cognitive architecture. For example, a student might learn to apply "Supplementary angles are two angles whose measures sum to 180 degrees" as several different procedural rules. E.g., one rule might say that if X and Y are supplementary and you know X, then Y will be 180 - X. He called his approach "knowledge compilation". ACT-R has been used successfully to model aspects of human cognition, such as learning and retention. ACT-R is also used in intelligent tutoring systems, called cognitive tutors, to successfully teach geometry, computer programming, and algebra to school children. [64]

Inductive logic programming was another approach to learning that allowed logic programs to be synthesized from input-output examples. E.g., Ehud Shapiro's MIS (Model Inference System) could synthesize Prolog programs from examples. [65] John R. Koza applied genetic algorithms to program synthesis to create genetic programming, which he used to synthesize LISP programs. Finally, Zohar Manna and Richard Waldinger provided a more general approach to program synthesis that synthesizes a functional program in the course of proving its specifications to be correct. [66]

As an alternative to logic, Roger Schank introduced case-based reasoning (CBR). The CBR approach outlined in his book, Dynamic Memory, [67] focuses first on remembering key problem-solving cases for future use and generalizing them where appropriate. When faced with a new problem, CBR retrieves the most similar previous case and adapts it to the specifics of the current problem. [68] Another alternative to logic, genetic algorithms and genetic programming are based on an evolutionary model of learning, where sets of rules are encoded into populations, the rules govern the behavior of individuals, and selection of the fittest prunes out sets of unsuitable rules over many generations. [69]
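The evolutionary loop behind genetic algorithms can be sketched on the classic OneMax toy problem (maximize the number of 1 bits in a string). The encoding, population size, and rates below are illustrative choices, not taken from any system mentioned above; what matters is the cycle of selection of the fittest, crossover, and occasional mutation.

```python
import random

random.seed(0)  # deterministic run for the example
GENES, POP, GENERATIONS = 20, 30, 60

def fitness(bits):
    """OneMax: fitness is simply the number of 1 bits."""
    return sum(bits)

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    # Selection of the fittest: the better half survives unchanged.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    # Refill the population with one-point crossover plus rare mutation.
    children = []
    while len(parents) + len(children) < POP:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, GENES)
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:           # occasional bit flip
            child[random.randrange(GENES)] ^= 1
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
```

Because the best individual always survives selection, the best fitness is non-decreasing across generations; in rule-learning applications, the bitstring would instead encode a rule set and the fitness would score its behavior.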

Symbolic machine learning was applied to learning concepts, rules, heuristics, and problem-solving. Approaches other than those above include:

1. Learning from instruction or advice - i.e., taking human instruction, posed as advice, and determining how to operationalize it in specific situations. For example, in a game of Hearts, learning exactly how to play a hand to "avoid taking points." [70]
2. Learning from exemplars - improving performance by accepting subject-matter expert (SME) feedback during training. When problem-solving fails, querying the expert to either learn a new exemplar for problem-solving or to learn a new explanation as to exactly why one exemplar is more relevant than another. For example, the program Protos learned to diagnose tinnitus cases by interacting with an audiologist. [71]
3. Learning by analogy - constructing problem solutions based on similar problems seen in the past, and then modifying their solutions to fit a new situation or domain. [72] [73]
4. Apprentice learning systems - learning novel solutions to problems by observing human problem-solving. Domain knowledge explains why novel solutions are correct and how the solution can be generalized. LEAP learned how to design VLSI circuits by observing human designers. [74]
5. Learning by discovery - i.e., creating tasks to carry out experiments and then learning from the results. Doug Lenat's Eurisko, for example, learned heuristics to beat human players at the Traveller role-playing game for two years in a row. [75]
6. Learning macro-operators - i.e., searching for useful macro-operators to be learned from sequences of basic problem-solving actions. Good macro-operators simplify problem-solving by allowing problems to be solved at a more abstract level. [76]

Deep learning and neuro-symbolic AI, 2011-now

With the rise of deep learning, the symbolic AI approach has been compared to deep learning as complementary: "... with parallels having been drawn many times by AI researchers between Kahneman's research on human reasoning and decision making - reflected in his book Thinking, Fast and Slow - and the so-called 'AI systems 1 and 2', which would in principle be modelled by deep learning and symbolic reasoning, respectively." In this view, symbolic reasoning is more apt for deliberative reasoning, planning, and explanation, while deep learning is more apt for fast pattern recognition in perceptual applications with noisy data. [17] [18]

Neuro-symbolic AI: integrating neural and symbolic approaches

Neuro-symbolic AI attempts to integrate neural and symbolic architectures in a manner that addresses the strengths and weaknesses of each, in a complementary fashion, in order to support robust AI capable of reasoning, learning, and cognitive modeling. As argued by Valiant [77] and many others, [78] the effective construction of rich computational cognitive models demands the combination of sound symbolic reasoning and efficient (machine) learning models. Gary Marcus, similarly, argues that: "We cannot construct rich cognitive models in an adequate, automated way without the triumvirate of hybrid architecture, rich prior knowledge, and sophisticated techniques for reasoning," [79] and in particular: "To build a robust, knowledge-driven approach to AI we must have the machinery of symbol-manipulation in our toolkit. Too much of useful knowledge is abstract to make do without tools that represent and manipulate abstraction, and to date, the only machinery that we know of that can manipulate such abstract knowledge reliably is the machinery of symbol manipulation." [80]

Henry Kautz, [19] Francesca Rossi, [81] and Bart Selman [82] have also argued for a synthesis. Their arguments are based on a need to address the two kinds of thinking discussed in Daniel Kahneman's book, Thinking, Fast and Slow. Kahneman describes human thinking as having two components, System 1 and System 2. System 1 is fast, automatic, intuitive and unconscious. System 2 is slower, step-by-step, and explicit. System 1 is the kind used for pattern recognition, while System 2 is far better suited for planning, deduction, and deliberative thinking. In this view, deep learning best models the first kind of thinking while symbolic reasoning best models the second kind, and both are needed.

Garcez and Lamb describe research in this area as being ongoing for at least the past twenty years, [83] dating from their 2002 book on neurosymbolic learning systems. [84] A series of workshops on neuro-symbolic reasoning has been held every year since 2005; see http://www.neural-symbolic.org/ for details.

In their 2015 paper, Neural-Symbolic Learning and Reasoning: Contributions and Challenges, Garcez et al. argue that:

The integration of the symbolic and connectionist paradigms of AI has been pursued by a relatively small research community over the last two decades and has yielded several significant results. Over the last decade, neural symbolic systems have been shown capable of overcoming the so-called propositional fixation of neural networks, as McCarthy (1988) put it in response to Smolensky (1988); see also (Hinton, 1990). Neural networks were shown capable of representing modal and temporal logics (d'Avila Garcez and Lamb, 2006) and fragments of first-order logic (Bader, Hitzler, Hölldobler, 2008; d'Avila Garcez, Lamb, Gabbay, 2009). Further, neural-symbolic systems have been applied to a number of problems in the areas of bioinformatics, control engineering, software verification and adaptation, visual intelligence, ontology learning, and computer games. [78]

Approaches for integration are varied. Henry Kautz's taxonomy of neuro-symbolic architectures, along with some examples, follows:

– Symbolic Neural symbolic - the current approach of many neural models in natural language processing, where words or subword tokens are both the ultimate input and output of large language models. Examples include BERT, RoBERTa, and GPT-3.
– Symbolic [Neural] - exemplified by AlphaGo, where symbolic techniques are used to invoke neural techniques. In this case the symbolic technique is Monte Carlo tree search and the neural techniques learn how to evaluate game positions.
– Neural | Symbolic - uses a neural architecture to interpret perceptual data as symbols and relationships that are then reasoned about symbolically.
– Neural: Symbolic → Neural - relies on symbolic reasoning to generate or label training data that is subsequently learned by a deep learning model, e.g., to train a neural model for symbolic computation by using a Macsyma-like symbolic mathematics system to create or label examples.
– Neural _ Symbolic - uses a neural net that is generated from symbolic rules. An example is the Neural Theorem Prover, [85] which constructs a neural network from an AND-OR proof tree generated from knowledge base rules and terms. Logic Tensor Networks [86] also fall into this category.
– Neural [Symbolic] - allows a neural model to directly call a symbolic reasoning engine, e.g., to perform an action or evaluate a state.
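
The Symbolic [Neural] pattern can be sketched in a few lines of Python. Everything below is illustrative: the "neural" evaluator is a stand-in stub rather than a trained network, and the symbolic controller is a one-ply lookahead rather than full Monte Carlo tree search.

```python
# Sketch of the "Symbolic [Neural]" pattern: a symbolic search routine
# enumerates moves and delegates position scoring to a learned evaluator.
# neural_evaluate is a stand-in for a trained value network.

def neural_evaluate(position):
    """Stub for a neural scorer; a real system would run a network here."""
    return sum(position)

def symbolic_search(position, legal_moves):
    """Symbolic controller: enumerate moves, call the neural component."""
    best_move, best_score = None, float("-inf")
    for move in legal_moves:
        successor = [p + m for p, m in zip(position, move)]
        score = neural_evaluate(successor)  # neural call inside symbolic loop
        if score > best_score:
            best_move, best_score = move, score
    return best_move

print(symbolic_search([0, 0], [(1, 0), (0, 2), (-1, 1)]))  # (0, 2)
```

In AlphaGo the same division of labor holds at a much larger scale: the tree search is symbolic, while move priors and position values come from neural networks.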

Many key research questions remain, such as:

– What is the best way to integrate neural and symbolic architectures? [87]
– How should symbolic structures be represented within neural networks and extracted from them?
– How should common-sense knowledge be learned and reasoned about?
– How can abstract knowledge that is difficult to encode logically be handled?

Techniques and contributions

This section provides an overview of techniques and contributions in an overall context, leading to many other, more detailed articles in Wikipedia. Sections on Machine Learning and Uncertain Reasoning are covered earlier in the history section.

AI programming languages

The key AI programming language in the US during the last symbolic AI boom period was LISP. LISP is the second oldest programming language after FORTRAN and was created in 1958 by John McCarthy. LISP provided the first read-eval-print loop to support rapid program development. Compiled functions could be freely mixed with interpreted functions. Program tracing, stepping, and breakpoints were also provided, along with the ability to change values or functions and continue from breakpoints or errors. It had the first self-hosting compiler, meaning that the compiler itself was originally written in LISP and then ran interpretively to compile the compiler code.

Other key innovations pioneered by LISP that have spread to other programming languages include:

Garbage collection
Dynamic typing
Higher-order functions
Recursion
Conditionals

Programs were themselves data structures that other programs could operate on, allowing the easy definition of higher-level languages.
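
The "programs as data" idea can be sketched with a toy s-expression evaluator, with nested Python lists standing in for Lisp code. Only numbers, +, and * are handled; this is an illustration, not a real Lisp.

```python
# Toy illustration of homoiconicity: the "program" is an ordinary nested
# list, so other programs can build or rewrite it before evaluation.

def lisp_eval(expr):
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    values = [lisp_eval(a) for a in args]
    if op == "+":
        return sum(values)
    if op == "*":
        product = 1
        for v in values:
            product *= v
        return product
    raise ValueError(f"unknown operator: {op}")

program = ["+", 1, ["*", 2, 3]]  # data that is also a program
print(lisp_eval(program))        # 7
```

Because the expression is an ordinary data structure, a macro-like transformation is just list manipulation, which is what makes defining higher-level languages on top of Lisp straightforward.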

In contrast to the US, in Europe the key AI programming language during that same period was Prolog. Prolog provided a built-in store of facts and clauses that could be queried by a read-eval-print loop. The store could act as a knowledge base and the clauses could act as rules or a restricted form of logic. As a subset of first-order logic, Prolog was based on Horn clauses with a closed-world assumption (any facts not known were considered false) and a unique name assumption for primitive terms (e.g., the identifier barack_obama was considered to refer to exactly one object). Backtracking and unification are built into Prolog.

Alain Colmerauer and Philippe Roussel are credited as the inventors of Prolog. Prolog is a form of logic programming, which was invented by Robert Kowalski. Its history was also influenced by Carl Hewitt's PLANNER, an assertional database with pattern-directed invocation of methods. For more detail see the section on the origins of Prolog in the PLANNER article.

Prolog is also a form of declarative programming. The logic clauses that describe programs are directly interpreted to run the programs specified. No explicit series of actions is required, as is the case with imperative programming languages.
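
Prolog's execution model can be sketched in Python at the propositional level. The facts and rule below are invented for illustration, and real Prolog additionally performs unification over terms with variables, which is omitted here for brevity.

```python
# Propositional sketch of Prolog-style backward chaining: a goal succeeds
# if it is a known fact, or if some rule concludes it and every goal in
# that rule's body succeeds in turn.

facts = {"parent(tom, bob)", "parent(bob, ann)"}
rules = [
    # (head, body): head holds if every body goal holds
    ("grandparent(tom, ann)", ["parent(tom, bob)", "parent(bob, ann)"]),
]

def prove(goal):
    if goal in facts:
        return True
    for head, body in rules:
        if head == goal and all(prove(g) for g in body):
            return True
    return False  # closed-world assumption: unprovable means false

print(prove("grandparent(tom, ann)"))  # True
print(prove("grandparent(ann, tom)"))  # False
```

The failure of the second query illustrates the closed-world assumption: anything that cannot be derived from the stored facts and clauses is treated as false.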

Japan championed Prolog for its Fifth Generation Project, intending to build special hardware for high performance. Similarly, LISP machines were built to run LISP, but as the second AI boom turned to bust these companies could not compete with new workstations that could now run LISP or Prolog natively at comparable speeds. See the history section for more detail.

Smalltalk was another influential AI programming language. For example, it introduced metaclasses and, along with Flavors and CommonLoops, influenced the Common Lisp Object System (CLOS), which is now part of Common Lisp, the current standard Lisp dialect. CLOS is a Lisp-based object-oriented system that allows multiple inheritance, in addition to incremental extensions to both classes and metaclasses, thereby providing a run-time meta-object protocol. [88]

For other AI programming languages see this list of programming languages for artificial intelligence. Currently, Python, a multi-paradigm programming language, is the most popular programming language, partly due to its extensive package library that supports data science, natural language processing, and deep learning. Python includes a read-eval-print loop, functional elements such as higher-order functions, and object-oriented programming that includes metaclasses.

Search

Search arises in many kinds of problem solving, including planning, constraint satisfaction, and playing games such as checkers, chess, and Go. The best known AI search algorithms are breadth-first search, depth-first search, A*, and Monte Carlo tree search. Key search algorithms for Boolean satisfiability are WalkSAT, conflict-driven clause learning, and the DPLL algorithm. For adversarial search when playing games, alpha-beta pruning, branch and bound, and minimax were early contributions.
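
As a concrete sketch of the simplest of these methods, breadth-first search over an explicit graph can be written in a few lines; the graph below is a toy example.

```python
# Breadth-first search: explore the graph level by level, returning a
# shortest path (by edge count) from start to goal, or None if unreachable.
from collections import deque

def bfs(graph, start, goal):
    frontier = deque([[start]])  # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Depth-first search swaps the queue for a stack, and A* replaces it with a priority queue ordered by path cost plus a heuristic estimate.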

Knowledge representation and reasoning

Multiple different approaches to representing knowledge and then reasoning with those representations have been investigated. Below is a quick overview of approaches to knowledge representation and automated reasoning.

Knowledge representation

Semantic networks, conceptual graphs, frames, and logic are all approaches to modeling knowledge such as domain knowledge, problem-solving knowledge, and the semantic meaning of language. Ontologies model key concepts and their relationships in a domain. Example ontologies are YAGO, WordNet, and DOLCE. DOLCE is an example of an upper ontology that can be used for any domain, while WordNet is a lexical resource that can also be viewed as an ontology. YAGO incorporates WordNet as part of its ontology, to align facts extracted from Wikipedia with WordNet synsets. The Disease Ontology is an example of a medical ontology currently in use.
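
The inheritance mechanism shared by semantic networks and frame systems can be sketched with plain dictionaries; the canary/bird/animal hierarchy below is a classic textbook illustration, not taken from any particular ontology.

```python
# Toy semantic network: each node has local attributes plus an "isa" link.
# Attribute lookup climbs the isa chain, so a canary inherits can_fly from
# bird and alive from animal.

network = {
    "canary": {"isa": "bird", "color": "yellow"},
    "bird":   {"isa": "animal", "can_fly": True},
    "animal": {"alive": True},
}

def get_attr(node, attr):
    while node is not None:
        frame = network[node]
        if attr in frame:
            return frame[attr]      # found locally or on an ancestor
        node = frame.get("isa")     # climb one level up the hierarchy
    return None

print(get_attr("canary", "can_fly"))  # True, inherited from "bird"
print(get_attr("canary", "color"))    # yellow, stored locally
```

Frame systems add defaults, attached procedures, and exceptions (a penguin frame could locally override can_fly) on top of this basic lookup scheme.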

Description logic is a logic for automated classification of ontologies and for detecting inconsistent classification data. OWL is a language used to represent ontologies with description logic. Protégé is an ontology editor that can read in OWL ontologies and then check consistency with deductive classifiers such as HermiT. [89]

First-order logic is more general than description logic. The automated theorem provers discussed below can prove theorems in first-order logic. Horn clause logic is more restricted than first-order logic and is used in logic programming languages such as Prolog. Extensions to first-order logic include temporal logic, to handle time; epistemic logic, to reason about agent knowledge; modal logic, to handle possibility and necessity; and probabilistic logics, to handle logic and probability together.

Automatic theorem proving

Examples of automated theorem provers for first-order logic are:

Prover9.
ACL2.
Vampire.

Prover9 can be used in conjunction with the Mace4 model checker. ACL2 is a theorem prover that can handle proofs by induction and is a descendant of the Boyer-Moore Theorem Prover, also known as Nqthm.

Reasoning in knowledge-based systems

Knowledge-based systems have an explicit knowledge base, typically of rules, to enhance reusability across domains by separating procedural code from domain knowledge. A separate inference engine processes rules and adds, deletes, or modifies facts in a knowledge store.

Forward chaining inference engines are the most common, and are seen in CLIPS and OPS5. Backward chaining occurs in Prolog, where a more limited logical representation, Horn clauses, is used. Pattern matching, specifically unification, is used in Prolog.
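
Forward chaining can be sketched at the propositional level: rules fire when all their conditions are in working memory, adding their conclusions until nothing new can be derived. The rules below are invented for illustration; production systems like OPS5 add efficient matching (the Rete algorithm) and conflict resolution on top of this basic loop.

```python
# Minimal forward-chaining engine: repeatedly fire any rule whose
# conditions are all satisfied, until a fixed point is reached.

rules = [
    # (set of conditions, conclusion)
    ({"bird", "healthy"}, "can_fly"),
    ({"can_fly"}, "can_migrate"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # rule fires, working memory grows
                changed = True
    return facts

print(forward_chain({"bird", "healthy"}))
# {'bird', 'healthy', 'can_fly', 'can_migrate'}
```

Backward chaining, by contrast, starts from a goal and works from rule conclusions back to conditions, as in the Prolog-style sketch shown earlier in this article.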

A more flexible kind of problem-solving occurs when reasoning about what to do next takes place, rather than simply choosing one of the available actions. This kind of meta-level reasoning is used in Soar and in the BB1 blackboard architecture.

Cognitive architectures such as ACT-R may have additional capabilities, such as the ability to compile frequently used knowledge into higher-level chunks.

Commonsense reasoning

Marvin Minsky first proposed frames as a way of interpreting common visual situations, such as an office, and Roger Schank extended this idea to scripts for common routines, such as dining out. Cyc has attempted to capture useful common-sense knowledge and has "micro-theories" to handle particular kinds of domain-specific reasoning.

Qualitative simulation, such as Benjamin Kuipers's QSIM, [90] approximates human reasoning about naive physics, such as what happens when we heat a liquid in a pot on the stove. We expect it to heat and possibly boil over, even though we may not know its temperature, its boiling point, or other details, such as air pressure.

Similarly, Allen's temporal interval algebra is a simplification of reasoning about time, and Region Connection Calculus is a simplification of reasoning about spatial relationships. Both can be solved with constraint solvers.
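
A few of Allen's interval relations can be computed directly from interval endpoints; the sketch below classifies only a subset of the thirteen relations (the full algebra also reasons over networks of such relations with constraint propagation).

```python
# Classify how interval a relates to interval b, for a handful of
# Allen's thirteen basic relations. Intervals are (start, end) pairs
# with start < end.

def allen_relation(a, b):
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:
        return "before"           # a ends before b starts
    if a2 == b1:
        return "meets"            # a ends exactly where b starts
    if a1 == b1 and a2 == b2:
        return "equal"
    if b1 < a1 and a2 < b2:
        return "during"           # a lies strictly inside b
    return "overlaps-or-other"    # remaining relations, not distinguished

print(allen_relation((1, 3), (3, 5)))  # meets
print(allen_relation((2, 4), (1, 6)))  # during
```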

Constraints and constraint-based reasoning

Constraint solvers perform a more limited kind of inference than first-order logic. They can simplify sets of spatiotemporal constraints, such as those for RCC or Temporal Algebra, along with solving other kinds of puzzle problems, such as Wordle, Sudoku, cryptarithmetic problems, and so on. Constraint logic programming can be used to solve scheduling problems, for example with constraint handling rules (CHR).
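
A cryptarithmetic problem makes the constraint idea concrete. The puzzle TO + GO = OUT is used here as an illustration: assign distinct digits to the letters so the sum holds, with no leading zeros. Real constraint solvers prune the search with propagation; this sketch simply enumerates assignments.

```python
# Brute-force solver for the cryptarithm TO + GO = OUT.
# Constraints: all letters get distinct digits; T, G, O are nonzero
# (no leading zeros); the arithmetic must hold.
from itertools import permutations

def solve():
    for t, o, g, u in permutations(range(10), 4):
        if t == 0 or g == 0 or o == 0:  # leading-zero constraint
            continue
        to, go = 10 * t + o, 10 * g + o
        out = 100 * o + 10 * u + t
        if to + go == out:
            return {"T": t, "O": o, "G": g, "U": u}
    return None

print(solve())  # {'T': 2, 'O': 1, 'G': 8, 'U': 0}, i.e. 21 + 81 = 102
```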

Automated planning

The General Problem Solver (GPS) cast planning as problem-solving and used means-ends analysis to create plans. STRIPS took a different approach, viewing planning as theorem proving. Graphplan takes a least-commitment approach to planning, rather than sequentially choosing actions from an initial state, working forwards, or from a goal state, working backwards. Satplan is an approach to planning in which a planning problem is reduced to a Boolean satisfiability problem.
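
The STRIPS representation itself is easy to sketch: states are sets of facts, and each action has preconditions, an add list, and a delete list. The toy block-and-shelf domain and action names below are invented for illustration, and the planner is a plain breadth-first search rather than the heuristic or least-commitment methods mentioned above.

```python
# Minimal STRIPS-style forward-search planner.
from collections import deque

actions = {
    # name: (preconditions, add list, delete list)
    "pick_up": ({"hand_empty", "block_on_table"},
                {"holding_block"},
                {"hand_empty", "block_on_table"}),
    "put_on_shelf": ({"holding_block"},
                     {"block_on_shelf", "hand_empty"},
                     {"holding_block"}),
}

def plan(start, goal):
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:           # all goal facts achieved
            return steps
        for name, (pre, add, delete) in actions.items():
            if pre <= state:        # action applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"hand_empty", "block_on_table"}, {"block_on_shelf"}))
# ['pick_up', 'put_on_shelf']
```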

Natural language processing

Natural language processing focuses on treating language as data to perform tasks such as identifying topics without necessarily understanding the intended meaning. Natural language understanding, in contrast, constructs a meaning representation and uses that for further processing, such as answering questions.

Parsing, tokenizing, spelling correction, part-of-speech tagging, and noun and verb phrase chunking are all aspects of natural language processing long handled by symbolic AI, but since improved by deep learning approaches. In symbolic AI, discourse representation theory and first-order logic have been used to represent sentence meanings. Latent semantic analysis (LSA) and explicit semantic analysis also provided vector representations of documents. In the latter case, vector components are interpretable as concepts named by Wikipedia articles.

New deep learning approaches based on Transformer models have now eclipsed these earlier symbolic AI approaches and attained state-of-the-art performance in natural language processing. However, Transformer models are opaque and do not yet produce human-interpretable semantic representations for sentences and documents. Instead, they produce task-specific vectors where the meaning of the vector components is opaque.

Agents and multi-agent systems

Agents are autonomous systems embedded in an environment they perceive and act upon in some sense. Russell and Norvig's standard textbook on artificial intelligence is organized to reflect agent architectures of increasing sophistication. [91] The sophistication of agents varies from simple reactive agents, to those with a model of the world and automated planning capabilities, possibly a BDI agent, i.e., one with beliefs, desires, and intentions (or alternatively a reinforcement learning model learned over time to choose actions), up to a combination of alternative architectures, such as a neuro-symbolic architecture [87] that includes deep learning for perception. [92]

In contrast, a multi-agent system consists of multiple agents that communicate among themselves with some inter-agent communication language such as Knowledge Query and Manipulation Language (KQML). The agents need not all have the same internal architecture. Advantages of multi-agent systems include the ability to divide work among the agents and to increase fault tolerance when agents are lost. Research problems include how agents reach consensus, distributed problem solving, multi-agent learning, multi-agent planning, and distributed constraint optimization.

Controversies

Controversies arose from early on in symbolic AI, both within the field (e.g., between logicists, the pro-logic "neats", and non-logicists, the anti-logic "scruffies") and between those who embraced AI but rejected symbolic approaches (primarily connectionists) and those outside the field. Critiques from outside the field came chiefly from philosophers, on intellectual grounds, but also from funding agencies, especially during the two AI winters.

The Frame Problem: knowledge representation difficulties for first-order logic

Limitations were discovered in using simple first-order logic to reason about dynamic domains. Problems were found both with regard to stating the preconditions for an action to succeed and in providing axioms for what did not change after an action was performed.

McCarthy and Hayes introduced the Frame Problem in 1969 in the paper "Some Philosophical Problems from the Standpoint of Artificial Intelligence." [93] A simple example occurs in "proving that one person could get into conversation with another", as an axiom asserting "if a person has a telephone he still has it after looking up a number in the telephone book" would be required for the deduction to succeed. Similar axioms would be needed for other domain actions, to specify what did not change.

A similar problem, called the Qualification Problem, occurs in trying to enumerate the preconditions for an action to succeed. An infinite number of pathological conditions can be imagined, e.g., a banana in a tailpipe could prevent a car from operating correctly.

McCarthy's approach to the frame problem was circumscription, a kind of non-monotonic logic where deductions could be made from actions that need only specify what would change, without having to explicitly state everything that would not change. Other non-monotonic logics provided truth maintenance systems that revised beliefs leading to contradictions.

Other ways of handling more open-ended domains included probabilistic reasoning systems and machine learning to learn new concepts and rules. McCarthy's Advice Taker can be viewed as an inspiration here, as it could incorporate new knowledge provided by a human in the form of assertions or rules. For example, experimental symbolic machine learning systems explored the ability to take high-level natural language advice and interpret it into domain-specific actionable rules.

Similar to the problems in handling dynamic domains, common-sense reasoning is also difficult to capture in formal reasoning. Examples of common-sense reasoning include implicit reasoning about how people think, or general knowledge of everyday events, objects, and living creatures. This kind of knowledge is taken for granted and not viewed as noteworthy. Common-sense reasoning is an open area of research and challenging both for symbolic systems (e.g., Cyc has attempted to capture key parts of this knowledge over more than a decade) and neural systems (e.g., self-driving cars that do not know not to drive into cones or not to hit pedestrians walking a bicycle).

McCarthy viewed his Advice Taker as having common sense, but his definition of common sense was different from the one above. [94] He defined a program as having common sense "if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows."

Connectionist AI: philosophical challenges and sociological conflicts

Connectionist approaches include earlier work on neural networks, [95] such as perceptrons; work in the mid to late 80s, such as Danny Hillis's Connection Machine and Yann LeCun's advances in convolutional neural networks; and today's more advanced approaches, such as Transformers, GANs, and other work in deep learning.

Three philosophical positions [96] have been outlined among connectionists:

1. Implementationism, where connectionist architectures implement the capacities for symbolic processing,
2. Radical connectionism, where symbolic processing is rejected totally, and connectionist architectures underlie intelligence and are fully sufficient to explain it,
3. Moderate connectionism, where symbolic processing and connectionist architectures are viewed as complementary and both are needed for intelligence

Olazaran, in his sociological history of the controversies within the neural network community, described the moderate connectionism view as basically compatible with current research in neuro-symbolic hybrids:

The third and last position I want to examine here is what I call the moderate connectionist view, a more eclectic view of the current debate between connectionism and symbolic AI. One of the researchers who has elaborated this position most explicitly is Andy Clark, a philosopher from the School of Cognitive and Computing Sciences of the University of Sussex (Brighton, England). Clark defended hybrid (partly symbolic, partly connectionist) systems. He claimed that (at least) two kinds of theories are needed in order to study and model cognition. On the one hand, for some information-processing tasks (such as pattern recognition) connectionism has advantages over symbolic models. But on the other hand, for other cognitive processes (such as serial, deductive reasoning, and generative symbol manipulation processes) the symbolic paradigm offers adequate models, and not just "approximations" (contrary to what radical connectionists would claim). [97]

Gary Marcus has claimed that the animus in the deep learning community against symbolic approaches now may be more sociological than philosophical:

To think that we can simply abandon symbol-manipulation is to suspend disbelief.

And yet, for the most part, that's how most current AI proceeds. Hinton and many others have tried hard to banish symbols altogether. The deep learning hope, seemingly grounded not so much in science as in a sort of historical grudge, is that intelligent behavior will emerge purely from the confluence of massive data and deep learning. Where classical computers and software solve tasks by defining sets of symbol-manipulating rules dedicated to particular jobs, such as editing a line in a word processor or performing a calculation in a spreadsheet, neural networks typically try to solve tasks by statistical approximation and learning from examples.

According to Marcus, Geoffrey Hinton and his colleagues have been emphatically "anti-symbolic":

When deep learning reemerged in 2012, it was with a kind of take-no-prisoners attitude that has characterized most of the last decade. By 2015, his hostility toward all things symbols had fully crystallized. He gave a lecture at an AI workshop at Stanford comparing symbols to aether, one of science's greatest mistakes.

Since then, his anti-symbolic campaign has only increased in intensity. In 2016, Yann LeCun, Bengio, and Hinton wrote a manifesto for deep learning in one of science's most important journals, Nature. It closed with a direct attack on symbol manipulation, calling not for reconciliation but for outright replacement. Later, Hinton told a gathering of European Union leaders that investing any further money in symbol-manipulating approaches was "a huge mistake," likening it to investing in internal combustion engines in the era of electric cars. [98]

Part of these disputes may be due to unclear terminology:

Turing award winner Judea Pearl offers a critique of machine learning which, unfortunately, conflates the terms machine learning and deep learning. Similarly, when Geoffrey Hinton refers to symbolic AI, the connotation of the term tends to be that of expert systems dispossessed of any ability to learn. The use of the terminology is in need of clarification. Machine learning is not confined to association rule mining, c.f. the body of work on symbolic ML and relational learning (the differences to deep learning being the choice of representation, localist logical rather than distributed, and the non-use of gradient-based learning algorithms). Equally, symbolic AI is not just about production rules written by hand. A proper definition of AI concerns knowledge representation and reasoning, autonomous multi-agent systems, planning and argumentation, as well as learning. [99]

Situated robotics: the world as a model

Another critique of symbolic AI is the embodied cognition approach:

The embodied cognition approach claims that it makes no sense to consider the brain separately: cognition takes place within a body, which is embedded in an environment. We need to study the system as a whole; the brain's functioning exploits regularities in its environment, including the rest of its body. Under the embodied cognition approach, robotics, vision, and other sensors become central, not peripheral. [100]

Rodney Brooks invented behavior-based robotics, one approach to embodied cognition. Nouvelle AI, another name for this approach, is viewed as an alternative to both symbolic AI and connectionist AI. His approach rejected representations, either symbolic or distributed, as not only unnecessary, but as detrimental. Instead, he created the subsumption architecture, a layered architecture for embodied agents. Each layer achieves a different purpose and must function in the real world. For example, the first robot he describes in Intelligence Without Representation has three layers. The bottom layer interprets sonar sensors to avoid objects. The middle layer causes the robot to wander around when there are no obstacles. The top layer causes the robot to go to more distant places for further exploration. Each layer can temporarily inhibit or suppress a lower-level layer. He criticized AI researchers for defining AI problems for their systems, when: "There is no clean division between perception (abstraction) and reasoning in the real world." [101] He called his robots "Creatures" and each layer was "composed of a fixed-topology network of simple finite state machines." [102] In the Nouvelle AI approach, "First, it is vitally important to test the Creatures we build in the real world; i.e., in the same world that we humans inhabit. It is disastrous to fall into the temptation of testing them in a simplified world first, even with the best intentions of later transferring activity to an unsimplified world." [103] His emphasis on real-world testing was in contrast to "Early work in AI concentrated on games, geometrical problems, symbolic algebra, theorem proving, and other formal systems" [104] and the use of the blocks world in symbolic AI systems such as SHRDLU.
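
The layered control idea can be sketched as a simple priority scheme. This is a loose simplification: the layer behaviors and percept fields below are invented, and Brooks's actual layers are networks of finite state machines connected by suppression and inhibition wires rather than a linear priority list.

```python
# Toy sketch of subsumption-style layered control: layers are consulted
# in priority order, and the first layer that produces a command wins,
# effectively suppressing the layers below it.

def avoid(percept):
    """Obstacle-avoidance layer: highest priority for safety."""
    return "turn_away" if percept["obstacle_near"] else None

def explore(percept):
    """Exploration layer: head toward a visible distant target."""
    return "goto_target" if percept["target_visible"] else None

def wander(percept):
    """Wander layer: default behavior when nothing else applies."""
    return "wander"

LAYERS = [avoid, explore, wander]  # earlier entries take precedence

def act(percept):
    for layer in LAYERS:
        command = layer(percept)
        if command is not None:
            return command

print(act({"obstacle_near": True, "target_visible": True}))    # turn_away
print(act({"obstacle_near": False, "target_visible": False}))  # wander
```

Note that no layer builds or consults a world model: each maps percepts directly to action, which is the sense in which the world itself serves as the model.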

Current views

Each approach (symbolic, connectionist, and behavior-based) has advantages, but has been criticized by the other approaches. Symbolic AI has been criticized as disembodied, liable to the qualification problem, and poor in handling the perceptual problems where deep learning excels. In turn, connectionist AI has been criticized as poorly suited for deliberative step-by-step problem solving, incorporating knowledge, and handling planning. Finally, Nouvelle AI excels in reactive and real-world robotics domains but has been criticized for difficulties in incorporating learning and knowledge.

Hybrid AIs incorporating one or more of these approaches are currently viewed as the path forward. [19] [81] [82] Russell and Norvig conclude that:

Overall, Dreyfus saw areas where AI did not have complete answers and said that AI is therefore impossible; we now see many of these same areas undergoing continued research and development leading to increased capability, not impossibility. [100]

See also

Artificial intelligence
Automated planning and scheduling
Automated theorem proving
Belief revision
Case-based reasoning
Cognitive architecture
Cognitive science
Connectionism
Constraint programming
Deep learning
First-order logic
GOFAI
History of artificial intelligence
Inductive logic programming
Knowledge-based systems
Knowledge representation and reasoning
Logic programming
Machine learning
Model checking
Model-based reasoning
Multi-agent system
Natural language processing
Neuro-symbolic AI
Ontology
Philosophy of artificial intelligence
Physical symbol systems hypothesis
Semantic Web
Sequential pattern mining
Statistical relational learning
Symbolic mathematics
YAGO ontology
WordNet

Notes

^ McCarthy once said: "This is AI, so we don't care if it's psychologically real". [4] McCarthy reiterated his position in 2006 at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence". [28] Pamela McCorduck writes that there are "two major branches of artificial intelligence: one aimed at producing intelligent behavior regardless of how it was accomplished, and the other aimed at modeling intelligent processes found in nature, particularly human ones." [29] Stuart Russell and Peter Norvig wrote "Aeronautical engineering texts do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool even other pigeons.'" [30]

Citations

^ Garnelo, Marta; Shanahan, Murray (October 2019). “Reconciling deep learning with symbolic expert system: representing objects and relations”. Current Opinion in Behavioral Sciences. 29: 17-23. doi:10.1016/ j.cobeha.2018.12.010. hdl:10044/ 1/67796.
^ Thomason, Richmond (February 27, 2024). “Logic-Based Expert System”. In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
^ Garnelo, Marta; Shanahan, Murray (2019-10-01). “Reconciling deep learning with symbolic expert system: representing items and relations”. Current Opinion in Behavioral Sciences. 29: 17-23. doi:10.1016/ j.cobeha.2018.12.010. hdl:10044/ 1/67796. S2CID 72336067.
^ a b Kolata 1982.
^ Kautz 2022, pp. 107-109.
^ a b Russell & Norvig 2021, p. 19.
^ a b Russell & Norvig 2021, pp. 22-23.
^ a b Kautz 2022, pp. 109-110.
^ a b c Kautz 2022, p. 110.
^ Kautz 2022, pp. 110-111.
^ a b Russell & Norvig 2021, p. 25.
^ Kautz 2022, p. 111.
^ Kautz 2020, pp. 110-111.
^ Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (1986 ). “Learning representations by back-propagating errors”. Nature. 323 (6088 ): 533-536. Bibcode:1986 Natur.323..533 R. doi:10.1038/ 323533a0. ISSN 1476-4687. S2CID 205001834.
^ LeCun, Y.; Boser, B.; Denker, I.; Henderson, D.; Howard, R.; Hubbard, W.; Tackel, L. (1989 ). “Backpropagation Applied to Handwritten Zip Code Recognition”. Neural Computation. 1 (4 ): 541-551. doi:10.1162/ neco.1989.1.4.541. S2CID 41312633.
^ a b Marcus & Davis 2019.
^ a b Rossi, Francesca. “Thinking Fast and Slow in AI”. AAAI. Retrieved 5 July 2022.
^ a b Selman, Bart. “AAAI Presidential Address: The State of AI“. AAAI. Retrieved 5 July 2022.
^ a b c Kautz 2020.
^ Kautz 2022, p. 106.
^ Newell & Simon 1972.
^ & McCorduck 2004, pp. 139-179, 245-250, 322-323 (EPAM).
^ Crevier 1993, pp. 145-149.
^ McCorduck 2004, pp. 450-451.
^ Crevier 1993, pp. 258-263.
^ a b Kautz 2022, p. 108.
^ Russell & Norvig 2021, p. 9 (logicist AI), p. 19 (McCarthy’s work).
^ Maker 2006.
^ McCorduck 2004, pp. 100-101.
^ Russell & Norvig 2021, p. 2.
^ McCorduck 2004, pp. 251-259.
^ Crevier 1993, pp. 193-196.
^ Howe 1994.
^ McCorduck 2004, pp. 259-305.
^ Crevier 1993, pp. 83-102, 163-176.
^ McCorduck 2004, pp. 421-424, 486-489.
^ Crevier 1993, p. 168.
^ McCorduck 2004, p. 489.
^ Crevier 1993, pp. 239-243.
^ Russell & Norvig 2021, p. 316, 340.
^ Kautz 2022, p. 109.
^ Russell & Norvig 2021, p. 22.
^ McCorduck 2004, pp. 266-276, 298-300, 314, 421.
^ Shustek, Len (June 2010). “An interview with Ed Feigenbaum”. Communications of the ACM. 53 (6 ): 41-45. doi:10.1145/ 1743546.1743564. ISSN 0001-0782. S2CID 10239007. Retrieved 2022-07-14.
^ Lenat, Douglas B; Feigenbaum, Edward A (1988 ). “On the limits of knowledge”. Proceedings of the International Workshop on Expert System for Industrial Applications: 291-300. doi:10.1109/ AIIA.1988.13308. S2CID 11778085.
^ Russell & Norvig 2021, pp. 22-24.
^ McCorduck 2004, pp. 327-335, 434-435.
^ Crevier 1993, pp. 145-62, 197-203.
^ a b Russell & Norvig 2021, p. 23.
^ a b Clancey 1987.
^ a b Shustek, Len (2010 ). “An interview with Ed Feigenbaum”. Communications of the ACM. 53 (6 ): 41-45. doi:10.1145/ 1743546.1743564. ISSN 0001-0782. S2CID 10239007. Retrieved 2022-08-05.
^ “The fascination with AI: what is synthetic intelligence?”. IONOS Digitalguide. Retrieved 2021-12-02.
^ Hayes-Roth, Murray & Adelman 2015.
^ Hayes-Roth, Barbara (1985 ). “A chalkboard architecture for control”. Expert system. 26 (3 ): 251-321. doi:10.1016/ 0004-3702( 85 )90063-3.
^ Hayes-Roth, Barbara (1980 ). Human Planning Processes. RAND.
^ Pearl 1988.
^ Spiegelhalter et al. 1993.
^ Russell & Norvig 2021, pp. 335-337.
^ Russell & Norvig 2021, p. 459.
^ Quinlan, J. Ross. "Chapter 15: Learning Efficient Classification Procedures and their Application to Chess End Games". In Michalski, Carbonell & Mitchell (1983).
^ Quinlan, J. Ross (1992-10-15). C4.5: Programs for Machine Learning (1st ed.). San Mateo, Calif.: Morgan Kaufmann. ISBN 978-1-55860-238-0.
^ Mitchell, Tom M.; Utgoff, Paul E.; Banerji, Ranan. "Chapter 6: Learning by Experimentation: Acquiring and Refining Problem-Solving Heuristics". In Michalski, Carbonell & Mitchell (1983).
^ Valiant, L. G. (1984-11-05). "A theory of the learnable". Communications of the ACM. 27 (11): 1134-1142. doi:10.1145/1968.1972. ISSN 0001-0782. S2CID 12837541.
^ Koedinger, K. R.; Anderson, J. R.; Hadley, W. H.; Mark, M. A.; others (1997). "Intelligent tutoring goes to school in the big city". International Journal of Artificial Intelligence in Education (IJAIED). 8: 30-43. Retrieved 2012-08-18.
^ Shapiro, Ehud Y (1981). "The Model Inference System". Proceedings of the 7th international joint conference on Artificial intelligence. IJCAI. Vol. 2. p. 1064.
^ Manna, Zohar; Waldinger, Richard (1980-01-01). "A Deductive Approach to Program Synthesis". ACM Trans. Program. Lang. Syst. 2 (1): 90-121. doi:10.1145/357084.357090. S2CID 14770735.
^ Schank, Roger C. (1983-01-28). Dynamic Memory: A Theory of Reminding and Learning in Computers and People. Cambridge; New York: Cambridge University Press. ISBN 978-0-521-27029-8.
^ Hammond, Kristian J. (1989-04-11). Case-Based Planning: Viewing Planning as a Memory Task. Boston: Academic Press. ISBN 978-0-12-322060-8.
^ Koza, John R. (1992-12-11). Genetic Programming: On the Programming of Computers by Means of Natural Selection (1st ed.). Cambridge, Mass.: A Bradford Book. ISBN 978-0-262-11170-6.
^ Mostow, David Jack. "Chapter 12: Machine Transformation of Advice into a Heuristic Search Procedure". In Michalski, Carbonell & Mitchell (1983).
^ Bareiss, Ray; Porter, Bruce; Wier, Craig. "Chapter 4: Protos: An Exemplar-Based Learning Apprentice". In Michalski, Carbonell & Mitchell (1986), pp. 112-139.
^ Carbonell, Jaime. "Chapter 5: Learning by Analogy: Formulating and Generalizing Plans from Past Experience". In Michalski, Carbonell & Mitchell (1983), pp. 137-162.
^ Carbonell, Jaime. "Chapter 14: Derivational Analogy: A Theory of Reconstructive Problem Solving and Expertise Acquisition". In Michalski, Carbonell & Mitchell (1986), pp. 371-392.
^ Mitchell, Tom; Mahadevan, Sridhar; Steinberg, Louis. "Chapter 10: LEAP: A Learning Apprentice for VLSI Design". In Kodratoff & Michalski (1990), pp. 271-289.
^ Lenat, Douglas. "Chapter 9: The Role of Heuristics in Learning by Discovery: Three Case Studies". In Michalski, Carbonell & Mitchell (1983), pp. 243-306.
^ Korf, Richard E. (1985). Learning to Solve Problems by Searching for Macro-Operators. Research Notes in Artificial Intelligence. Pitman Publishing. ISBN 0-273-08690-1.
^ Valiant 2008.
^ a b Garcez et al. 2015.
^ Marcus 2020, p. 44.
^ Marcus 2020, p. 17.
^ a b Rossi 2022.
^ a b Selman 2022.
^ Garcez & Lamb 2020, p. 2.
^ Garcez et al. 2002.
^ Rocktäschel, Tim; Riedel, Sebastian (2016). "Learning Knowledge Base Inference with Neural Theorem Provers". Proceedings of the 5th Workshop on Automated Knowledge Base Construction. San Diego, CA: Association for Computational Linguistics. pp. 45-50. doi:10.18653/v1/W16-1309. Retrieved 2022-08-06.
^ Serafini, Luciano; Garcez, Artur d'Avila (2016), Logic Tensor Networks: Deep Learning and Logical Reasoning from Data and Knowledge, arXiv:1606.04422.
^ a b Garcez, Artur d'Avila; Lamb, Luis C.; Gabbay, Dov M. (2009). Neural-Symbolic Cognitive Reasoning (1st ed.). Berlin; Heidelberg: Springer. Bibcode:2009nscr.book.....D. doi:10.1007/978-3-540-73246-4. ISBN 978-3-540-73245-7. S2CID 14002173.
^ Kiczales, Gregor; Rivieres, Jim des; Bobrow, Daniel G. (1991-07-30). The Art of the Metaobject Protocol (1st ed.). Cambridge, Mass: The MIT Press. ISBN 978-0-262-61074-2.
^ Motik, Boris; Shearer, Rob; Horrocks, Ian (2009-10-28). "Hypertableau Reasoning for Description Logics". Journal of Artificial Intelligence Research. 36: 165-228. arXiv:1401.3485. doi:10.1613/jair.2811. ISSN 1076-9757. S2CID 190609.
^ Kuipers, Benjamin (1994). Qualitative Reasoning: Modeling and Simulation with Incomplete Knowledge. MIT Press. ISBN 978-0-262-51540-5.
^ Russell & Norvig 2021.
^ de Penning, Leo; Garcez, Artur S. d'Avila; Lamb, Luís C.; Meyer, John-Jules Ch. (2011). "A Neural-Symbolic Cognitive Agent for Online Learning and Reasoning". IJCAI 2011: 1653-1658.
^ McCarthy & Hayes 1969.
^ McCarthy 1959.
^ Nilsson 1998, p. 7.
^ Olazaran 1993, pp. 411-416.
^ Olazaran 1993, pp. 415-416.
^ Marcus 2020, p. 20.
^ Garcez & Lamb 2020, p. 8.
^ a b Russell & Norvig 2021, p. 982.
^ Brooks 1991, p. 143.
^ Brooks 1991, p. 151.
^ Brooks 1991, p. 150.
^ Brooks 1991, p. 142.
References

Brooks, Rodney A. (1991). "Intelligence without representation". Artificial Intelligence. 47 (1): 139-159. doi:10.1016/0004-3702(91)90053-M. ISSN 0004-3702. S2CID 207507849. Retrieved 2022-09-13.
Clancey, William (1987). Knowledge-Based Tutoring: The GUIDON Program (MIT Press Series in Artificial Intelligence) (Hardcover ed.).
Crevier, Daniel (1993). AI: The Tumultuous History of the Search for Artificial Intelligence. New York, NY: BasicBooks. ISBN 0-465-02997-3.
Dreyfus, Hubert L (1981). "From micro-worlds to knowledge representation: AI at an impasse" (PDF). Mind Design. MIT Press, Cambridge, MA: 161-204.
Garcez, Artur S. d'Avila; Broda, Krysia; Gabbay, Dov M. (2002). Neural-Symbolic Learning Systems: Foundations and Applications. Springer Science & Business Media. ISBN 978-1-85233-512-0.
Garcez, Artur; Besold, Tarek; De Raedt, Luc; Földiák, Peter; Hitzler, Pascal; Icard, Thomas; Kühnberger, Kai-Uwe; Lamb, Luís; Miikkulainen, Risto; Silver, Daniel (2015). Neural-Symbolic Learning and Reasoning: Contributions and Challenges. AAAI Spring Symposium - Knowledge Representation and Reasoning: Integrating Symbolic and Neural Approaches. Stanford, CA: AAAI Press. doi:10.13140/2.1.1779.4243.
Garcez, Artur d'Avila; Gori, Marco; Lamb, Luis C.; Serafini, Luciano; Spranger, Michael; Tran, Son N. (2019), Neural-Symbolic Computing: An Effective Methodology for Principled Integration of Machine Learning and Reasoning, arXiv:1905.06088.
Garcez, Artur d'Avila; Lamb, Luis C. (2020), Neurosymbolic AI: The 3rd Wave, arXiv:2012.05876.
Haugeland, John (1985), Artificial Intelligence: The Very Idea, Cambridge, Mass.: MIT Press, ISBN 0-262-08153-9.
Hayes-Roth, Frederick; Murray, William; Adelman, Leonard (2015). "Expert systems". AccessScience. doi:10.1036/1097-8542.248550.
Honavar, Vasant; Uhr, Leonard (1994). Symbolic Artificial Intelligence, Connectionist Networks & Beyond (Technical report). Iowa State University Digital Repository, Computer Science Technical Reports. 76. p. 6.
Honavar, Vasant (1995). Symbolic Artificial Intelligence and Numeric Artificial Neural Networks: Towards a Resolution of the Dichotomy. The Springer International Series in Engineering and Computer Science. Springer US. pp. 351-388. doi:10.1007/978-0-585-29599-2_11.
Howe, J. (November 1994). "Artificial Intelligence at Edinburgh University: a Perspective". Archived from the original on 15 May 2007. Retrieved 30 August 2007.
Kautz, Henry (2020-02-11). The Third AI Summer, Henry Kautz, AAAI 2020 Robert S. Engelmore Memorial Award Lecture. Retrieved 2022-07-06.
Kautz, Henry (2022). "The Third AI Summer: AAAI Robert S. Engelmore Memorial Lecture". AI Magazine. 43 (1): 93-104. doi:10.1609/aimag.v43i1.19122. ISSN 2371-9621. S2CID 248213051. Retrieved 2022-07-12.
Kodratoff, Yves; Michalski, Ryszard, eds. (1990). Machine Learning: an Artificial Intelligence Approach. Vol. III. San Mateo, Calif.: Morgan Kaufman. ISBN 0-934613-09-5. OCLC 893488404.
Kolata, G. (1982). "How can computers get common sense?". Science. 217 (4566): 1237-1238. Bibcode:1982Sci...217.1237K. doi:10.1126/science.217.4566.1237. PMID 17837639.
Maker, Meg Houston (2006). "AI@50: AI Past, Present, Future". Dartmouth College. Archived from the original on 3 January 2007. Retrieved 16 October 2008.
Marcus, Gary; Davis, Ernest (2019). Rebooting AI: Building Artificial Intelligence We Can Trust. New York: Pantheon Books. ISBN 9781524748258. OCLC 1083223029.
Marcus, Gary (2020), The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence, arXiv:2002.06177.
McCarthy, John (1959). Programs with Common Sense. Symposium on Mechanization of Thought Processes. National Physical Laboratory, Teddington, UK. p. 8.
McCarthy, John; Hayes, Patrick (1969). "Some Philosophical Problems From the Standpoint of Artificial Intelligence". Machine Intelligence 4. B. Meltzer, Donald Michie (eds.): 463-502.
McCorduck, Pamela (2004), Machines Who Think (second ed.), Natick, Massachusetts: A. K. Peters, ISBN 1-56881-205-1.
Michalski, Ryszard; Carbonell, Jaime; Mitchell, Tom, eds. (1983). Machine Learning: an Artificial Intelligence Approach. Vol. I. Palo Alto, Calif.: Tioga Publishing Company. ISBN 0-935382-05-4. OCLC 9262069.
Michalski, Ryszard; Carbonell, Jaime; Mitchell, Tom, eds. (1986). Machine Learning: an Artificial Intelligence Approach. Vol. II. Los Altos, Calif.: Morgan Kaufman. ISBN 0-934613-00-1.
Newell, Allen; Simon, Herbert A. (1972). Human Problem Solving (1st ed.). Englewood Cliffs, New Jersey: Prentice Hall. ISBN 0-13-445403-0.
Newell, Allen; Simon, H. A. (1976). "Computer Science as Empirical Inquiry: Symbols and Search". Communications of the ACM. 19 (3): 113-126. doi:10.1145/360018.360022.
Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann. ISBN 978-1-55860-467-4. Archived from the original on 26 July 2020. Retrieved 18 November 2019.
Olazaran, Mikel (1993-01-01), "A Sociological History of the Neural Network Controversy", in Yovits, Marshall C. (ed.), Advances in Computers Volume 37, vol. 37, Elsevier, pp. 335-425, doi:10.1016/S0065-2458(08)60408-8, ISBN 9780120121373, retrieved 2023-10-31.
Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, California: Morgan Kaufmann. ISBN 978-1-55860-479-7. OCLC 249625842.
Russell, Stuart J.; Norvig, Peter (2021). Artificial Intelligence: A Modern Approach (fourth ed.). Hoboken: Pearson. ISBN 978-0-13-461099-3. LCCN 20190474.
Rossi, Francesca (2022-07-06). “AAAI2022: Thinking Fast and Slow in AI (AAAI 2022 Invited Talk)”. Retrieved 2022-07-06.
Selman, Bart (2022-07-06). “AAAI2022: Presidential Address: The State of AI”. Retrieved 2022-07-06.
Serafini, Luciano; Garcez, Artur d'Avila (2016-07-07), Logic Tensor Networks: Deep Learning and Logical Reasoning from Data and Knowledge, arXiv:1606.04422.
Spiegelhalter, David J.; Dawid, A. Philip; Lauritzen, Steffen; Cowell, Robert G. (1993). "Bayesian analysis in expert systems". Statistical Science. 8 (3).
Turing, A. M. (1950). "I.-Computing Machinery and Intelligence". Mind. LIX (236): 433-460. doi:10.1093/mind/LIX.236.433. ISSN 0026-4423. Retrieved 2022-09-14.
Valiant, Leslie G (2008). "Knowledge Infusion: In Pursuit of Robustness in Artificial Intelligence". In Hariharan, R.; Mukund, M.; Vinay, V. (eds.). Foundations of Software Technology and Theoretical Computer Science (Bangalore). pp. 415-422.
Xifan Yao; Jiajun Zhou; Jiangming Zhang; Claudio R. Boer (2017). From Intelligent Manufacturing to Smart Manufacturing for Industry 4.0 Driven by Next Generation Artificial Intelligence and Further On.