A
Action-Based Planning
The goal of action-based planning is to determine how to decompose a high-level action into a network of subactions that perform the requisite task. The major task within such a planning system is therefore to manage the constraints that apply to the interrelationships (e.g., ordering constraints) between actions. In fact, action-based planning is best viewed as a constraint satisfaction problem.
The search for a plan cycles through the following steps: choose a constraint and apply the constraint check; if the constraint is not satisfied, choose a bug from the set of constraint bugs; choose and apply a fix, yielding a new plan and possibly a new set of constraints to check.
In contrast, state-based planners generally conduct their search for a plan by reasoning about how the actions within a plan affect the state of the world and how the state of the world affects the applicability of actions.
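For illustration, the refine-and-repair cycle above can be sketched in a few lines of Python. The plan, constraint, bug, and fix objects here are hypothetical stand-ins, not drawn from any particular planner:

```python
# Hypothetical Plan/Constraint/bug/fix objects; only the control flow is the point.

def choose_fix(bug):
    # Placeholder policy: take the first known fix for this violation.
    return bug.fixes[0]

def refine(plan, constraints):
    agenda = list(constraints)
    while agenda:
        constraint = agenda.pop()                    # choose a constraint
        for bug in constraint.check(plan):           # apply the constraint check
            fix = choose_fix(bug)                    # choose a fix for the bug
            plan, new_constraints = fix.apply(plan)  # yields a new plan...
            agenda.extend(new_constraints)           # ...and new constraints to check
    return plan
```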
Adaptive Interface
A computer interface that automatically and dynamically adapts to the needs and competence of each individual user of the software.
Adversarial Machine Learning (AML)
The process of extracting information about the characteristics and behavior of an ML system, and/or learning how to manipulate the inputs into an ML system to obtain a preferred outcome.
Agents
Agents are software programs that are capable of autonomous, flexible, purposeful and reasoning action in pursuit of one or more goals. They are designed to take timely action in response to external stimuli from their environment on behalf of a human. When multiple agents are being used together in a system, individual agents are expected to interact together as appropriate to achieve the goals of the overall system. Also called autonomous agents, assistants, brokers, bots, droids, intelligent agents, software agents.
Agent Architecture
There are two levels of agent architecture, when a number of agents are to work together for a common goal. There is the architecture of the system of agents, that will determine how they work together, and which does not need to be concerned with how individual agents fulfil their sub-missions; and the architecture of each individual agent, which does determine its inner workings.
The architecture of one software agent will permit interactions among most of the following components (depending on the agent’s goals): perceptors, effectors, communication channels, a state model, a model-based reasoner, a planner/scheduler, a reactive execution monitor, its reflexes (which enable the agent to react immediately to changes in its environment that it can’t wait on the planner to deal with), and its goals. The perceptors, effectors, and communication channels will also enable interaction with the agent’s outside world.
AI Effect
The great practical benefits of AI applications, and even the existence of AI in many software products, go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. Many marketing people don’t use the term “artificial intelligence” even when their company’s products rely on some AI techniques. Why not? It may be because AI was oversold in the first giddy days of practical rule-based expert systems in the 1980s, with the peak perhaps marked by the Business Week cover of July 9, 1984, announcing “Artificial Intelligence, IT’S HERE.”
James Hogan in his book, Mind Matters, has his own explanation of the AI Effect:
“AI researchers talk about a peculiar phenomenon known as the “AI effect.” At the outset of a project, the goal is to entice a performance from machines in some designated area that everyone agrees would require “intelligence” if done by a human. If the project fails, it becomes a target of derision to be pointed at by the skeptics as an example of the absurdity of the idea that AI could be possible. If it succeeds, with the process demystified and its inner workings laid bare as lines of prosaic computer code, the subject is dismissed as “not really all that intelligent after all.” Perhaps … the real threat that we resist is the further demystification of ourselves…It seems to happen repeatedly that a line of AI work … finds itself being diverted in such a direction that … the measures that were supposed to mark its attainment are demonstrated brilliantly. Then, the resulting new knowledge typically stimulates demands for application of it and a burgeoning industry, market, and additional facet to our way of life comes into being, which within a decade we take for granted; but by then, of course, it isn’t AI.”
AI Languages and Tools
AI software has different requirements from other, conventional software. Therefore, specific languages for AI software have been developed. These include LISP, Prolog, and Smalltalk. While these languages often reduce the time to develop an artificial intelligence application, they can lengthen the time to execute the application. Therefore, much AI software is now written in languages such as C++ and Java, which typically increases development time, but shortens execution time. Also, to reduce the cost of AI software, a range of commercial software development tools have also been developed. Stottler Henke has developed its own proprietary tools for some of the specialized applications it is experienced in creating.
Algorithm
An algorithm is a set of instructions that explain how to solve a problem. It is usually first stated in English and arithmetic, and from this, a programmer can translate it into executable code (that is, code to be run on a computer).
Anomaly Detection
Anomaly detection consists of examining specific data points and detecting rare occurrences that seem suspicious because they deviate from the usual pattern of behavior.
Applications of Artificial Intelligence
The actual and potential applications are virtually endless. Reviewing Stottler Henke’s work will give you some idea of the range. In general, AI applications are used to increase the productivity of knowledge workers by intelligently automating their tasks; or to make technical products of all kinds easier to use for both workers and consumers by intelligent automation of different aspects of the functionality of complex products.
Artificial Intelligence
Artificial intelligence (AI) is the mimicking of human thought and cognitive processes to solve complex problems automatically. AI uses techniques for writing computer code to represent and manipulate knowledge. Different techniques mimic the different ways that people think and reason (see Case-Based Reasoning and Model-Based Reasoning for examples). AI applications can be either stand-alone software, such as decision support software, or embedded within larger software or hardware systems.
AI has been around for about 50 years, and while early optimism about quickly matching human reasoning capabilities has not been realized yet, there is a significant and growing set of valuable applications. AI hasn’t yet mimicked much of the common-sense reasoning of a five-year-old child. Nevertheless, it can successfully mimic many expert tasks performed by trained adults, and there is probably more artificial intelligence being used in practice in one form or another than most people realize.
Really intelligent applications will only be achievable with artificial intelligence, and it is the mark of a successful designer of AI software to deliver functionality that can’t be delivered without using AI.
Artificial Neural Network (ANN)
A learning model created to act like a human brain that solves tasks that are too difficult for traditional computer systems to solve.
Associative Memories
Associative memories work by recalling information in response to an information cue. Associative memories can be autoassociative or heteroassociative. Autoassociative memories recall the same information that is used as a cue, which can be useful to complete a partial pattern. Heteroassociative memories recall information that differs from the cue, associating one item with another. Human long-term memory is thought to be associative because of the way in which one thought retrieved from it leads to another. When we want to store a new item of information in our long-term memory, it typically takes us eight seconds to store an item that can’t be associated with a pre-stored item, but only one or two seconds if there is an existing information structure with which to associate the new item.
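A classic autoassociative design is the Hopfield-style network, sketched below with NumPy; the stored patterns and the corrupted cue are invented for illustration:

```python
import numpy as np

def store(patterns):
    """Hebbian storage of +1/-1 patterns in a weight matrix."""
    n = patterns.shape[1]
    w = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(w, 0)            # no self-connections
    return w / n

def recall(w, cue, steps=10):
    """Repeatedly update the state until it settles on a stored pattern."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(w @ s >= 0, 1, -1)
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1,  1, 1, -1, -1, -1]])
w = store(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])   # last element corrupted
print(recall(w, noisy))                   # recovers the first stored pattern
```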
Automated Decision-Making (ADM)
Automated decision-making (ADM) involves the use of data, machines, and algorithms to make decisions in an array of contexts, such as public administration, business, health, education, law, employment, transport, media, and entertainment, all of which have varying degrees of human oversight or intervention.
Automated Diagnosis Systems
Most diagnosis work is done by expert humans such as mechanics, engineers, doctors, firemen, customer service agents, and analysts of various kinds. All of us usually do at least a little diagnosis even if it isn’t a major part of our working lives. We use a range of techniques for our diagnoses. Primarily, we compare a current situation with past ones, and reapply, perhaps with small modifications, the best past solutions. If this doesn’t work, we may run small mental simulations of possible solutions through our minds, based on first principles. We may do more complex simulations using first principles on paper or on computers, looking for solutions. Some problems are also amenable to quantitative solutions. We may hand off the problem to greater experts than ourselves, who use the same methods. The problem with humans doing diagnosis is that it often takes a long time and a lot of mistakes to learn to become an expert. Many situations just don’t recur frequently, and we may have to encounter each situation several times to become familiar with it. Automatic diagnosis systems can help avoid these problems, while helping humans to become experts faster. They work best in combination with a few human experts, as there are some diagnosis problems that humans are better at solving, and also because humans are more creative and adaptive than computers in coming up with new solutions to new problems.
Automated Scheduling
The process of automatically gathering business data and applying user-defined parameters like time frame constraints and location availability to create events for participants.
Automatic Target Recognition (ATR)
The ability for an algorithm or device to recognize targets or other objects based on data obtained from sensors.
Automation
The application of technology, programs, robotics, or processes in order to achieve outcomes with little to no human input.
Autonomous Agents
A piece of AI software that automatically performs a task on a human’s behalf, or even on the behalf of another piece of AI software, so together they accomplish a useful task for a person somewhere. They are capable of independent action in dynamic, unpredictable environments. “Autonomous agent” is a trendy term that is sometimes reserved for AI software used in conjunction with the Internet (for example, AI software that acts as your assistant in intelligently managing your e-mail).
Autonomous agents present the best hope for gaining additional utility from computing facilities. Over the past few years, the term “agent” has been used very loosely. Our definition of a software agent is: “an intelligent software application with the authorization and capability to sense its environment and work in a goal-directed manner.” Generally, the term “agent” implies “intelligence,” meaning the level of complexity of the tasks involved approaches that which would previously have required human intervention.
Autonomous Systems
An autonomous system is a large network or group of networks that have a unified routing policy.
B
Bayesian Networks
A modeling technique that provides a mathematically sound formalism for representing and reasoning about uncertainty, imprecision, or unpredictability in our knowledge. For example, seeing that the front lawn is wet, one might wish to determine whether it rained during the previous night. Inference algorithms can use the structure of the Bayesian network to calculate conditional probabilities based on whatever data has been observed (e.g., the street does not appear wet, so it is 90% likely that the wetness is due to the sprinklers). Bayesian networks offer a set of benefits not provided by any other system for dealing with uncertainty – an easy-to-understand graphical representation, a strong mathematical foundation, and effective automated tuning mechanisms. These techniques have proved useful in a wide variety of tasks including medical diagnosis, natural language understanding, plan recognition, and intrusion detection. Also called belief networks, Bayes networks, or causal probabilistic networks.
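The wet-lawn example can be worked as a toy Python computation: enumerate the joint distribution of a small three-node network and condition on the evidence. All probabilities below are invented for illustration:

```python
from itertools import product

P_rain = {True: 0.3, False: 0.7}
P_sprinkler = {True: 0.4, False: 0.6}

def p_lawn_wet(rain, sprinkler):
    return 0.99 if (rain or sprinkler) else 0.02

def p_street_wet(rain):
    return 0.95 if rain else 0.05

def joint(rain, sprinkler, lawn, street):
    pl = p_lawn_wet(rain, sprinkler)
    ps = p_street_wet(rain)
    return (P_rain[rain] * P_sprinkler[sprinkler]
            * (pl if lawn else 1 - pl) * (ps if street else 1 - ps))

# P(Rain | lawn wet, street dry), computed by enumeration:
num = sum(joint(True, s, True, False) for s in (True, False))
den = sum(joint(r, s, True, False) for r, s in product((True, False), repeat=2))
print(round(num / den, 3))   # about 0.05: the dry street points to the sprinklers
```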
Big Data
Extremely large data sets that may be analyzed computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions.
C
Case-Based Reasoning
Case-based reasoning (CBR) solves a current problem by retrieving the solutions to previous similar problems and altering those solutions to meet the current needs. It is based upon previous experiences and patterns of previous experiences. Humans use this technique to solve many of their problems, drawing on years of experience in a particular job or activity (e.g., a skilled paramedic arriving on an accident scene can often automatically know the best procedure to deal with a patient). One advantage of CBR is that inexperienced people can draw on the knowledge of experienced colleagues, including ones who aren’t in the organization, to solve their problems. Synonym: reasoning by analogy.
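The retrieve-and-adapt cycle at the heart of CBR can be sketched in a few lines of Python; the case base, similarity measure, and adaptation rule below are all hypothetical:

```python
case_base = [
    {"symptoms": {"fever", "cough"}, "treatment": "rest and fluids"},
    {"symptoms": {"fever", "rash"}, "treatment": "antihistamine"},
]

def similarity(a, b):
    """Jaccard overlap between two symptom sets."""
    return len(a & b) / len(a | b)

def solve(problem):
    # Retrieve: find the most similar past case.
    best = max(case_base, key=lambda c: similarity(c["symptoms"], problem))
    # Adapt: reuse the old solution, flagging unmatched symptoms for review.
    extra = problem - best["symptoms"]
    note = f" (review extra symptoms: {sorted(extra)})" if extra else ""
    return best["treatment"] + note

print(solve({"fever", "cough", "headache"}))
```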
Chatbots
A chat robot (chatbot for short) is designed to simulate a conversation with human users by communicating through text chats, voice commands, or both. Chatbots are a commonly used interface for computer programs that include AI capabilities.
Classification
Automated classification tools such as decision trees have been shown to be very effective for distinguishing and characterizing very large volumes of data. They assign items to one of a set of predefined classes of objects based on a set of observed features. For example, one might determine whether a particular mushroom is “poisonous” or “edible” based on its color, size, and gill size. Classifiers can be learned automatically from a set of examples through supervised learning. Classification rules are rules that discriminate between different partitions of a database based on various attributes within the database. The partitions of the database are based on an attribute called the classification label (e.g., “faulty” and “good”).
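As a minimal sketch of the mushroom example, here is a classifier built with the scikit-learn library; the tiny data set and integer feature encoding are invented for illustration:

```python
from sklearn.tree import DecisionTreeClassifier

# Features are [color, size, gill_size], each encoded as an integer.
X = [[0, 2, 1], [1, 1, 0], [0, 0, 1], [1, 2, 0]]    # observed features
y = ["edible", "poisonous", "edible", "poisonous"]  # class labels

clf = DecisionTreeClassifier().fit(X, y)   # supervised learning from examples
print(clf.predict([[1, 0, 0]]))            # assigns the new mushroom to a class
```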
Clustering
Clustering is an approach to learning that seeks to place objects into meaningful groups automatically based on their similarity. Unlike classification, clustering does not require the groups to be predefined; the hope is that the algorithm will discover useful but hidden groupings of data points. A well-publicized success of a clustering system was NASA’s discovery of a new class of stellar spectra. See IQE, GIIF, WebMediator, Rome Graphics, and data mining for examples of applications that use clustering.
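A bare-bones version of one popular clustering algorithm, k-means, can be written in plain Python; the points and the choice of k below are arbitrary:

```python
import random

def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans(points, k, iters=20):
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:   # assign each point to its nearest center
            nearest = min(range(k), key=lambda j: dist2(p, centers[j]))
            groups[nearest].append(p)
        for j, group in enumerate(groups):   # move centers to group means
            if group:
                centers[j] = (sum(p[0] for p in group) / len(group),
                              sum(p[1] for p in group) / len(group))
    return centers, groups

points = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
centers, groups = kmeans(points, k=2)
print(centers)   # one center near each natural cluster
```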
Cognitive Science
Artificial intelligence can be defined as the mimicking of human thought to perform useful tasks, such as solving complex problems. This creation of new paradigms, algorithms, and techniques requires continued study of the human mind, the inspiration of AI. To that end, AI software designers team with cognitive psychologists and use cognitive science concepts, especially in knowledge elicitation and system design.
Cognitive Task Analysis
Cognitive task analysis (CTA) is a systematic process by which the cognitive elements of task performance are identified. This includes both domain knowledge and cognitive processing. Thus, CTA focuses on mental activities that cannot be observed and is in contrast to behavioral task analysis that breaks the task down into observable, procedural steps. CTA is most useful for highly complex tasks with few observable behaviors. Examples of cognitive processing elements include: to decide, judge, notice, assess, recognize, interpret, prioritize, and anticipate. Examples of domain knowledge elements include concepts, principles, and interrelationships; goals and goal structures; rules, strategies and plans; implicit knowledge; and mental models.
The results from CTA have various applications such as identifying content to be included within training programs for complex cognitive tasks, research on expert-novice differences in terms of domain knowledge and cognitive processing during task performance, modeling of expert performance to support expert system design, and the design of human-machine interfaces.
Collaborative Filtering
A technique for leveraging historical data about preferences of a body of users to help make recommendations or filter information for a particular user. Intuitively, the goal of these techniques is to develop an understanding of what may be interesting to a user by uncovering what is interesting to people who are similar to that user. See GIIF and IQE for examples of applications that use collaborative filtering techniques.
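A minimal user-based collaborative filter might look like the following Python sketch; the ratings data and the similarity measure are invented for illustration:

```python
ratings = {
    "ann":  {"dune": 5, "alien": 4, "heat": 1},
    "bob":  {"dune": 5, "alien": 5, "fargo": 4},
    "carl": {"heat": 5, "fargo": 2, "alien": 1},
}

def similarity(u, v):
    shared = ratings[u].keys() & ratings[v].keys()
    if not shared:
        return float("-inf")
    # Negative mean squared difference: higher means more alike.
    return -sum((ratings[u][i] - ratings[v][i]) ** 2 for i in shared) / len(shared)

def recommend(user):
    # Find the most similar other user, then suggest their best unseen item.
    peer = max((v for v in ratings if v != user),
               key=lambda v: similarity(user, v))
    unseen = ratings[peer].keys() - ratings[user].keys()
    return max(unseen, key=lambda i: ratings[peer][i])

print(recommend("ann"))   # 'fargo': bob rates like ann and liked it
```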
Commonsense Reasoning
Ordinary people manage to accomplish an extraordinary number of complex tasks just using simple, informal thought processes based on a large amount of common knowledge. They can quickly plan and undertake a shopping expedition to six or seven different shops, as well as pick up the kids from soccer and drop a book back at the library, quite efficiently without logically considering the hundreds of thousands of alternative ways to plan such an outing. They can manage their personal finances, or dance their way across a crowded room without hitting anyone, just using commonsense reasoning. Artificial intelligence is far behind humans in using such reasoning except for limited jobs, and tasks that rely heavily on commonsense reasoning are usually poor candidates for AI applications.
Computer Vision
Making sense of what we see is usually easy for humans, but very hard for computers. Practical vision systems to date are limited to working in tightly controlled environments. Synonym: machine vision
Constraint Satisfaction
Constraints are events, conditions, or rules that limit our alternatives for completing a task. For example, the foundation of a building has to be laid before the framing is done; a car has to be refueled once every four hundred miles; a neurosurgeon is needed to perform brain surgery; a Walkman can only operate on a 9-volt battery.
Satisfying constraints is particularly important in scheduling complex activities. By first considering applicable constraints, the number of possible schedules to be considered in a search for an acceptable schedule can be reduced enormously, making the search process much more efficient. Constraint satisfaction techniques can be used to solve scheduling problems directly. Constraint satisfaction algorithms include heuristic constraint-based search and simulated annealing.
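A toy backtracking constraint solver, applied to a scheduling problem like the building example above (the tasks, slots, and constraints are invented):

```python
tasks = ["foundation", "framing", "roofing"]
slots = [1, 2, 3]

# Each constraint is a predicate over a (possibly partial) assignment.
constraints = [
    lambda a: "foundation" not in a or "framing" not in a
              or a["foundation"] < a["framing"],     # foundation before framing
    lambda a: "framing" not in a or "roofing" not in a
              or a["framing"] < a["roofing"],        # framing before roofing
    lambda a: len(set(a.values())) == len(a),        # one task per slot
]

def solve(assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(tasks):
        return assignment
    task = next(t for t in tasks if t not in assignment)
    for slot in slots:
        trial = {**assignment, task: slot}
        if all(c(trial) for c in constraints):       # prune violations early
            result = solve(trial)
            if result:
                return result
    return None

print(solve())   # {'foundation': 1, 'framing': 2, 'roofing': 3}
```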
Convolutional Neural Network (CNN)
A type of neural network that identifies and makes sense of images.
Critical Chain Project Management (CCPM)
Critical chain project management (CCPM) is a project management methodology that helps you keep track of essential resources and prioritize dependent tasks within a project, ensuring you can complete projects as efficiently as possible.
Critical Chain Scheduling (CCS)
Critical chain scheduling is a methodology that focuses on resource-levelling. Despite the fact that dependent tasks generally define project timelines, resource utilization plays an integral role.
D
Data Fusion
Information processing that deals with the association, correlation, and combination of data and information from single and multiple sources to achieve a more complete and more accurate assessment of a situation. The process is characterized by continuous refinement of its estimates and assessments, and by evaluation of the need for additional sources, or modification of the process itself, to achieve improved results.
Data Mining
The non-trivial process of uncovering interesting and useful relationships and patterns in very large databases to guide better business and technical decisions. Data mining is becoming increasingly important because all types of commercial and government institutions are now logging huge volumes of data and require the means to optimize the use of these vast resources. The size of the databases to which data mining techniques are applied is what distinguishes them from more traditional statistical and machine learning approaches, which can be computationally costly. Data mining forms part of the overall process of ‘Knowledge Discovery in Databases.’ Data mining is preceded by the preliminary stages of preparing and cleaning up the data, and followed by the subsequent incorporation of other relevant knowledge, and the final interpretation. See all the data mining projects for examples of use of data mining techniques.
Data Science
An interdisciplinary field that combines scientific methods, systems, and processes from statistics, information science, and computer science to provide insight into phenomena via either structured or unstructured data.
Decision Aids
Software that helps humans make decisions, particularly about complex matters when a high degree of expertise is needed to make a good decision.
Decision-Centered Design
Decision-centered design emphasizes the use of cognitive task analysis methods to uncover expertise and decision requirements. It advocates for designs that focus on difficult decisions and unexpected situations rather than routine operations. While it focuses on identifying key decisions rather than exhaustively documenting all possible cognitive requirements, decision-centered design also recognizes that individual differences in expertise play an important role in decision making. Two methods of determining design requirements are critical decision method (CDM) interviewing and concept mapping. For example, these methods have been used in the design of a crew position aboard a surveillance aircraft and the redesign of anti-air warfare positions in the combat information center of naval vessels.
Decision Support
Decision support is a broad class of applications for artificial intelligence software. There are many situations when humans would prefer machines, particularly computers, to either automatically assist them in making decisions, or actually make and act on a decision. There are a wide range of non-AI decision support systems such as most of the process control systems successfully running chemical plants and power plants and the like under steady state conditions. However, whenever situations become more complex—for example, in chemical plants that don’t run under steady state, or in businesses when both humans and equipment are interacting—intelligent decision support is required. That can only be provided by automatic decision support software using artificial intelligence techniques. Stottler Henke has created a wide range of decision support applications that provide examples of such situations. Synonym: intelligent decision support.
Decision Theory
Decision theory provides a basis for making choices in the face of uncertainty, based on the assignment of probabilities and payoffs to all possible outcomes of each decision. The space of possible actions and states of the world is represented by a decision tree.
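The core calculation is simple enough to show in a few lines of Python; the actions, probabilities, and payoffs below are invented:

```python
# Each action maps to its possible outcomes as (probability, payoff) pairs.
actions = {
    "launch": [(0.6, 100), (0.4, -80)],
    "wait":   [(1.0, 10)],
}

def expected_value(outcomes):
    # Weight each payoff by its probability and sum.
    return sum(p * payoff for p, payoff in outcomes)

best = max(actions, key=lambda a: expected_value(actions[a]))
print(best, expected_value(actions[best]))   # launch 28.0
```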
Decision Trees
A decision tree is a graphical representation of a hierarchical set of rules that describe how one might evaluate or classify an object of interest based on the answers to a series of questions. For instance, a decision tree can codify the sequence of tests a doctor might take in diagnosing a patient. Such a decision tree will order the tests based on their importance to the diagnostic task. The result of each successive test dictates the path you take through the tree and therefore the tests (and their order) that will be suggested. When you finally reach a node at which no further tests are suggested, the patient has been fully diagnosed. Decision trees have the advantage of being easy to understand because of their hierarchical rule structure, and explanations for their diagnoses can be readily and automatically generated.
Decision trees can be automatically developed from a set of examples and are capable of discovering powerful predictive rules even when very large numbers of variables are involved. These algorithms operate by selecting the test that best discriminates amongst classes/diagnoses and then repeating this test selection process on each of the subsets matching the different test outcomes (e.g., “patients with temperatures greater than 101ºF” and “patients with temperatures less than or equal to 101ºF”). This process continues until all the examples in a particular set have the same class/diagnosis.
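The test-selection step can be sketched with the standard information-gain calculation; the toy medical examples below are invented:

```python
import math
from collections import Counter

examples = [
    {"temp": "high", "cough": "yes", "diagnosis": "flu"},
    {"temp": "high", "cough": "no",  "diagnosis": "flu"},
    {"temp": "low",  "cough": "yes", "diagnosis": "cold"},
    {"temp": "low",  "cough": "no",  "diagnosis": "well"},
]

def entropy(rows):
    counts = Counter(r["diagnosis"] for r in rows)
    total = len(rows)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def gain(rows, attr):
    # Information gain: entropy before the split minus the weighted
    # entropy of the subsets produced by each test outcome.
    remainder = 0.0
    for value in {r[attr] for r in rows}:
        subset = [r for r in rows if r[attr] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return entropy(rows) - remainder

best = max(["temp", "cough"], key=lambda a: gain(examples, a))
print(best)   # 'temp': it discriminates best among the diagnoses
```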
Deep Learning
The ability for machines to autonomously mimic human thought patterns through artificial neural networks composed of cascading layers of information.
Dependency Maintenance
Dependency maintenance is the technique of recording why certain beliefs are held, decisions were made, or actions were taken, in order to facilitate revising those decisions, actions, or beliefs in the face of changing circumstances. Several families of truth maintenance systems have been developed to facilitate dependency maintenance in particular kinds of situations (e.g., the need to consider many alternate scenarios versus a single scenario, the frequency with which assumptions change, etc.).
Document Clustering
With document clustering techniques, documents can be automatically grouped into meaningful classes so that users of a database of full-text documents can easily search through related documents. Finding individual documents from amongst large on-line, full-text collections has been a growing problem in recent years due to the falling price of computer storage capacity and the networking of document databases to large numbers of people. Traditional library indexing has not provided adequate information retrieval from these large sources. The techniques for document clustering generally involve some natural language processing along with a collection of statistical measures.
Domain
An overworked word for AI people. “Domain” can mean a variety of things including a subject area, field of knowledge, an industry, a specific job, an area of activity, a sphere of influence, or a range of interest, e.g., chemistry, medical diagnosis, putting out fires, operating a nuclear power plant, planning a wedding, diagnosing faults in a car. Generally, a domain is a system in which a particular set of rules, facts, or assumptions operates. Humans can usually easily figure out what’s meant from the context in which “domain” is used; computers could probably not figure out what a human means when he or she says “domain.”
Domain Expert
The person who knows how to perform an activity within the domain, and whose knowledge is to be the subject of an expert system. This person’s or persons’ knowledge and method of work are observed, recorded, and entered into a knowledge base for use by an expert system. The domain expert’s knowledge may be supplemented by written knowledge contained in operating manuals, standards, specifications, computer programs, etc., that are used by the experts. Synonym: subject-matter expert (SME).
E
Emergence
Emergence is the phenomenon of complex patterns of behavior arising out of the myriad interactions of simple agents, which may each operate according to a few simple rules. To put it another way, an emergent system is much more than simply the sum of its parts. It can happen without any grand master outside the system telling the individual agents how to behave. For example, all the people in a modern city acting in their individual capacities as growers, processors, distributors, sellers, buyers, and consumers of food collectively create a food market matching supply and demand of thousands of different items, without an overall plan. An ant colony provides another example of simple agents, each operating according to a few simple rules, producing a larger system that finds food and provides shelter and protection for its members. Artificial intelligence software running on powerful computers can demonstrate useful emergent behavior as well, such as that demonstrated in automatic scheduling software that creates near-optimal schedules for complex activities subject to many constraints.
Expert System
An expert system encapsulates the specialist knowledge gained from a human expert (such as a bond trader or a loan underwriter) and applies that knowledge automatically to make decisions. For example, the knowledge of doctors about how to diagnose a disease can be encapsulated in software. The process of acquiring the knowledge from the experts and their documentation and successfully incorporating it in the software is called knowledge engineering, and requires considerable skill to perform successfully. Applications include customer service and helpdesk support, computer or network troubleshooting, regulatory tracking, autocorrect features in word processors, document generation such as tax forms, and scheduling.
Explainability
Explainability aims to answer stakeholder questions about the decision-making processes of AI systems and has been identified by the U.S. government as a key tool for developing trust and transparency within these systems.
F
Fuzzy Logic
Traditional Western logic systems assume that things are either in one category or another. Yet in everyday life, we know this is often not precisely so. People aren’t just short or tall, they can be fairly short or fairly tall, and besides we differ in our opinions of what height actually corresponds to tall, anyway. The ingredients of a cake aren’t just not mixed or mixed, they can be moderately well mixed. Fuzzy logic provides a way of taking our commonsense knowledge that most things are a matter of degree into account when a computer is automatically making a decision. For example, one rice cooker uses fuzzy logic to cook rice perfectly even if the cook put in too little water or too much water.
Fuzzy Sets
In mathematics, fuzzy sets are sets whose elements have degrees of membership. Fuzzy sets were introduced independently by Lotfi A. Zadeh and Dieter Klaua in 1965 as an extension of the classical notion of a set. At the same time, Salii (1965) defined a more general kind of structure called an L-relation, which he studied in an abstract algebraic context. Fuzzy relations, which are now used in different areas, such as linguistics (De Cock et al., 2000), decision-making (Kuzmin, 1982), and clustering (Bezdek, 1978), are special cases of L-relations when L is the unit interval [0, 1].
In classical set theory, the membership of elements in a set is assessed in binary terms according to a bivalent condition — an element either belongs or does not belong to the set. By contrast, fuzzy set theory permits the gradual assessment of the membership of elements in a set; this is described with the aid of a membership function valued in the real unit interval [0, 1]. Fuzzy sets generalize classical sets, since the indicator functions of classical sets are special cases of the membership functions of fuzzy sets, if the latter only take values 0 or 1. In fuzzy set theory, classical bivalent sets are usually called crisp sets. Fuzzy set theory can be used in a wide range of domains in which information is incomplete or imprecise, such as bioinformatics.
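A minimal example of a membership function, with arbitrary thresholds chosen for illustration:

```python
def tall(height_cm):
    """Degree of membership in the fuzzy set 'tall', from 0.0 to 1.0."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30    # linear ramp between the two cutoffs

for h in (155, 170, 180, 195):
    print(h, round(tall(h), 2))       # 0.0, 0.33, 0.67, 1.0

# Standard fuzzy set operations follow Zadeh's definitions:
# intersection is min, union is max, complement is 1 - membership.
```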
G
Game Theory
Game theory is a branch of mathematics that seeks to model decision making in conflict situations.
Genetic Algorithms
Search algorithms used in machine learning that involve iteratively generating new candidate solutions by combining two high-scoring earlier (or parent) solutions in a search for a better solution. So named because of their reliance on ideas drawn from biological evolution.
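A minimal genetic algorithm in Python, evolving bit strings toward all ones; the fitness function, rates, and population sizes are arbitrary choices for illustration:

```python
import random

def crossover(a, b):
    # Combine two parents by splicing them at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.05):
    # Occasionally flip a bit to keep the search exploring.
    return [1 - b if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(50):
    population.sort(key=sum, reverse=True)   # fitness = number of ones
    parents = population[:10]                # keep the high scorers
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children          # next generation

print(sum(population[0]))   # close to 20 after a few dozen generations
```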
Granularity
Refers to the basic size of units that can be manipulated. Often refers to the level of detail or abstraction at which a particular problem is analyzed. One characteristic of human intelligence, Jerry R. Hobbs has pointed out, is the ability to conceptualize a world at different levels of granularity (complexity) and to move among them in considering problems and situations. The simpler the problem, the coarser the grain can be and still provide effective solutions to the problem.
H
Heterogeneous Databases
Databases that contain different kinds of data, e.g., text and numerical data.
Heuristic
A heuristic is commonly called a rule of thumb. That is, a heuristic is a method for solving a problem that doesn’t guarantee a good solution all the time, but usually does. The term is attributed to the mathematician George Polya. An example of a heuristic would be to search for a lost object by starting in the last place you can remember using it.
Human-Centered Computing
Computers and other machines should be designed to effectively serve people’s needs and requirements. All too often they’re not. Commonly cited examples of this are the difficulty people have in setting up their VCR to record a TV show; and the difficulties people have in setting up a home computer facility, or hooking up to the Internet. Artificial intelligence software can be used to deliver more human-centered computing, improving system usability, extending the power of human reasoning, enabling greater collaboration amongst humans and machines, and promoting human learning. A goal of human-centered computing is for cooperating humans and machines to compensate for each other’s respective weaknesses (e.g., machines compensating for limited human short-term memory and the slowness with which humans can search through many alternative possible solutions to a problem; and humans compensating for machines’ more limited pattern-recognition capability, language processing, and creativity) in support of human goals. Synonym: mixed-initiative planning.
Hybrid Systems
Many of Stottler Henke’s artificial intelligence software applications use multiple AI techniques in combination. For example, case-based reasoning may be used in combination with model-based reasoning in an automatic diagnostic system. Case-based reasoning, which tends to be less expensive to develop and faster to run, may draw on a historical database of past equipment failures, the diagnoses of those failures, the repairs effected, and the outcomes achieved. So CBR may be used to make most failure diagnoses. Model-based reasoning may be used to diagnose less common but expensive failures, and also to make fine adjustments to the repair procedures retrieved from similar cases in the case base by CBR.
I
Inference Engine
The part of an expert system responsible for drawing new conclusions from the current data and rules. The inference engine is a portion of the reusable part of an expert system (along with the user interface, a knowledge base editor, and an explanation system), that will work with different sets of case-specific data and knowledge bases.
Information Filtering
An information filtering system sorts through large volumes of dynamically generated information to present to the user those nuggets of information which are likely to satisfy his or her immediate needs. Information filtering overlaps the older field of information retrieval, which also deals with the selection of information. Many of the features of information retrieval system design (e.g., representation, similarity measures or boolean selection, document space visualization) are present in information filtering systems as well. Information filtering is roughly information retrieval from a rapidly changing information space.
Intelligent Entities
An intelligent entity is an entity that exhibits a significant degree of intelligence: the ability to reason, make plans, carry out plans, acquire knowledge, learn from its environment, manipulate its environment, and interact with other entities within its environment to some extent.
Intelligent Tutoring Systems
Intelligent tutoring systems encode and apply the subject matter and teaching expertise of experienced instructors, using artificial intelligence (AI) software technologies and cognitive psychology models, to provide the benefits of one-on-one instruction automatically and cost-effectively. These systems provide coaching and hinting, evaluate each student’s performance, assess the student’s knowledge and skills, provide instructional feedback, and select appropriate next exercises for the student. See Stottler Henke case studies.
J
K
KAPPA
Rule-based object-oriented expert system tool and application developer (IntelliCorp Inc.). KAPPA is written in C, and is available for PCs. See AI Languages and Tools.
Knowledge-Based Planning
Knowledge-based planning represents the planner’s incomplete knowledge state and the domain actions. Actions are modeled in terms of how they modify the knowledge state of the planner, rather than in terms of how they modify the physical world. This approach scales better and supports features that make it applicable to much richer domains and problems.
Knowledge-rich approaches, such as hierarchical task network planning, have the advantages of scalability, expressiveness, continuous plan modification during execution, and the ability to interact with humans. However, these planners also have limitations, such as requiring complete domain models and failing to model uncertainty, that often make them inadequate for real-world problems.
Knowledge-Based Representations
The form or structure of databases and knowledge bases for expert and other intelligent systems, so that the information and solutions provided by a system are both accurate and complete. Usually involves a logically-based language capable of both syntactic and semantic representation of time, events, actions, processes, and entities. Knowledge representation languages include Lisp, Prolog, Smalltalk, OPS-5, and KL-ONE. Structures include rules, scripts, frames, endorsements, and semantic networks.
Knowledge-Based Systems
Usually a synonym for expert system, though some think of expert systems as knowledge-based systems that are designed to work on practical, real-world problems.
Knowledge Elicitation
Synonym: knowledge acquisition.
Knowledge Engineering
Knowledge engineering is the process of collecting knowledge from human experts in a form suitable for designing and implementing an expert system. The person conducting knowledge engineering is called a knowledge engineer.
Knowledge Fusion
See Data Fusion.
Knowledge Graphs
Also known as semantic networks, knowledge graphs represent a network of real-world entities, including objects, events, situations, or concepts, and illustrate the relationships between them.
Knowledge Management
Knowledge management (KM) is the process of capturing, developing, sharing, and effectively using organizational knowledge. It refers to a multi-disciplined approach to achieving organizational objectives by making the best use of knowledge. It includes courses taught in the fields of business administration, information systems, management, and library and information sciences. More recently, other fields have started contributing to KM research; these include information and media, computer science, public health, and public policy.
Knowledge management efforts typically focus on organizational objectives such as improved performance, competitive advantage, innovation, the sharing of lessons learned, integration, and continuous improvement of the organization. KM efforts overlap with organizational learning and may be distinguished from it by a greater focus on the management of knowledge as a strategic asset and on encouraging the sharing of knowledge. KM is seen as an enabler of organizational learning, and as a more concrete mechanism than earlier, more abstract research in that area.
Knowledge Representation
Knowledge representation is one of the two basic techniques of artificial intelligence; the other is the capability to search for end points from a starting point. The way in which knowledge is represented has a powerful effect on the prospects for a computer or person to draw conclusions or make inferences from that knowledge. Consider the representation of numbers that we wish to add. Which is easier, adding 10 + 50 in Arabic numerals, or adding X plus L in Roman numerals? Consider also the use of algebraic symbols in solving problems for unknown numerical quantities, compared with trying to do the same problems just with words and numbers.
L
LISP
LISP (short for list processing language) is a computer language invented by John McCarthy, one of the pioneers of artificial intelligence. The language is ideal for representing knowledge (e.g., If a fire alarm is ringing, then there is a fire) from which inferences are to be drawn.
Localization
The process of adapting and customizing a product so that it meets a specific market’s needs, as identified by its language, culture, expectations, local standards, and legal requirements.
M
Machine Intelligence
An umbrella term that encompasses machine learning, deep learning, and classical learning algorithms.
Machine Learning
Machine learning refers to the ability of computers to automatically acquire new knowledge, learning from, for example, past cases, the computer’s own experiences, or exploration. Machine learning has many uses, such as finding rules to direct marketing campaigns based on lessons learned from analysis of data from supermarket loyalty campaigns, or learning to recognize characters from people’s handwriting. Machine learning enables computer software to adapt to changing circumstances, enabling it to make better decisions than non-AI software. Synonyms: learning, automatic learning.
Machine Perception
The ability for a system to receive and interpret data from the outside world in a way similar to how humans use their senses. This is typically done with attached hardware, though it can also be accomplished in software alone.
Model-Based Reasoning
Model-based reasoning (MBR) concentrates on reasoning about a system’s behavior from an explicit model of the mechanisms underlying that behavior. Model-based techniques can represent knowledge more completely, and at a greater level of detail, than techniques that encode experience, because they employ models that are compact axiomatic systems from which large amounts of information can be deduced.
Modeling
Synonym for simulation.
MRO Scheduling
MRO (maintenance, repair, and overhaul) scheduling covers the “base maintenance” activities that need to be carried out. In particular, it focuses on scheduling maintenance work orders on the right resources at the right time and in the right sequence.
N
Natural Language Generation (NLG)
Driven by artificial intelligence, NLG is a software process that produces natural written or spoken language from both structured and unstructured data, often helping computers give users feedback in a human language that they can comprehend.
Natural Language Processing (NLP)
English is an example of a natural language; a computer language isn’t. For a computer to process a natural language, it would have to mimic what a human does. That is, the computer would have to recognize the sequence of words spoken by a person or another computer, understand the syntax or grammar of the words (i.e., do a syntactical analysis), and then extract the meaning of the words. A limited amount of meaning can be derived from a sequence of words taken out of context (i.e., by semantic analysis); but much more of the meaning depends on the context in which the words are spoken (e.g., who spoke them, under what circumstances, with what tone, and what else was said, particularly before the words), which would require a pragmatic analysis to extract. To date, natural language processing is poorly developed, and computers are not yet able to even approach the ability of humans to extract meaning from natural languages; yet there are already valuable practical applications of the technology.
Natural Language Understanding (NLU)
Natural language understanding (NLU) is a branch of artificial intelligence that uses computer software to understand input in the form of sentences in speech or text. One common form of NLU is parsing, which takes written text and converts it into a structured format that computers can understand.
Neural Networks
Neural networks are an approach to machine learning which developed out of attempts to model the processing that occurs within the neurons of the brain. By using simple processing units (neurons), organized in a layered and highly parallel architecture, it is possible to perform arbitrarily complex calculations. Learning is achieved through repeated minor modifications to selected neurons, which results in a very powerful classification system. A problem with neural networks is that it is very difficult to understand their internal reasoning process, and therefore to obtain an explanation for any particular conclusion. They are best used, therefore, when the results of a model are more important than understanding how the model works. Neural network software is used to recognize handwriting, and also to control chemical processes to run at desired conditions. Other applications include stock market analysis, fingerprint identification, character recognition, speech recognition, credit analysis, scientific analysis of data, and neurophysiological research. Neural networks are also known as neural nets, connectionism, and parallel associative memory.
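The “repeated minor modifications” idea can be shown with a single artificial neuron learning the logical AND function by the classic perceptron rule; a minimal sketch:

```python
# Training data: inputs and the target output of logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
rate = 0.1

for epoch in range(25):
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        error = target - out        # repeated minor modifications:
        w[0] += rate * error * x1   # nudge each weight toward the
        w[1] += rate * error * x2   # answer it should have given
        bias += rate * error

print([(x, 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0) for x, _ in data])
```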
O
Object-Oriented Programming
An object-oriented problem-solving approach is very similar to the way a human solves problems. It consists of identifying objects and the correct sequence in which to use these objects to solve the problem. In other words, object-oriented problem solving consists of designing objects whose individual behaviors and interactions solve a specific problem. Interactions between objects take place through the exchange of messages, where a message to an object causes it to perform its operations and solve its part of the problem. The object-oriented problem-solving approach thus has four steps: 1) identify the problem; 2) identify the objects needed for the solution; 3) identify messages to be sent to the objects; and 4) create a sequence of messages to the objects that solves the problem.
In an object-oriented system, objects are data structures used to represent knowledge about physical things (e.g., pumps, computers, arteries, any equipment) or concepts (e.g., plans, designs, requirements). They are typically organized into hierarchical classes, and each class of object has information about its attributes stored in instance variables associated with each instance in the class. The only thing that an object knows about another object is that object’s interface. Each object’s data and logic is hidden from other objects. This allows the developer to separate an object’s implementation from its behavior. This separation creates a “black-box” effect where the user is isolated from implementation changes. As long as the interface remains the same, any changes to the internal implementation are transparent to the user. Objects provide considerable leverage in representing the world in a natural way and in reusing code that operates on common classes of objects.
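The four steps and the “black-box” effect can be illustrated with two toy Python classes; the Pump and Tank objects are invented examples:

```python
class Tank:
    def __init__(self, capacity):
        self._level = 0               # hidden implementation detail
        self._capacity = capacity

    def fill(self, amount):           # part of the public interface
        self._level = min(self._capacity, self._level + amount)

    def level(self):
        return self._level

class Pump:
    def __init__(self, rate):
        self._rate = rate

    def run(self, tank, minutes):     # sends "fill" messages to the tank
        tank.fill(self._rate * minutes)

tank = Tank(capacity=100)
Pump(rate=4).run(tank, minutes=10)
print(tank.level())                   # 40; the caller never touched _level
```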
Ontology
A formal ontology is a rigorous specification of a set of specialized vocabulary terms and their relationships sufficient to describe and reason about the range of situations of interest in some domain.
In other words, it is a conceptual representation of the entities, events, and their relationships that compose a specific domain. Two primary relationships of interest are abstraction (“a cat is a specific instance of a more general entity called animal”) and composition (“a cat has whiskers and claws”). Ontologies are generally used to model a domain of interest, permitting inferential and deductive reasoning by learning systems.
P
Pattern Recognition
The use of feature analysis to identify an image of an object. May involve techniques such as statistical pattern recognition, Bayesian analysis, classification, cluster analysis, and analysis of texture and edges. See machine vision.
Plan Recognition
The goal of plan recognition is to interpret an agent’s intentions by ascribing goals and plans to it based on partial observation of its behavior up to the current time. Divining the agent’s underlying plan can be useful for many purposes including: interpreting the agent’s past behavior, predicting the agent’s future behavior, or acting to collaborate with (or thwart) the agent.
Planning and Scheduling
Planning is the field of AI that deals with the synthesis of plans, which are partial orders of (possibly conditional) actions to meet specified goals under specified constraints. It is related to scheduling, which is the task of determining when and with what resources to carry out each member of a specific set of actions to satisfy constraints regarding ordering, effectiveness and resource allocation. In 1991, SHAI developed the concept of intelligent entities for planning and scheduling applications. Intelligent entities play the role of managers of various resources, groups of resources, tasks, and projects made up of tasks.
Planning and Scheduling Agents
Multiagent planning is concerned with planning by (and for) multiple agents. It can involve agents planning for a common goal, an agent coordinating the plans (plan merging) or planning of others, or agents refining their own plans while negotiating over tasks or resources. The topic also involves how agents can do this in real time while executing plans (distributed continual planning). Multiagent scheduling differs from multiagent planning the same way planning and scheduling differ: in scheduling often the tasks that need to be performed are already decided, and in practice, scheduling tends to focus on algorithms for specific problem domains.
Strategies or action sequences are typically executed by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to decision theory.
Programming by Demonstration
Programming by demonstration (PBD) is a term that describes a variety of end-user programming techniques that generate code from examples provided by the user. The motivation behind programming by demonstration is simple and compelling: if a user knows how to perform a task on the computer, that alone should be sufficient to create a program to perform the task. It should not be necessary to learn a programming language like C++ or BASIC. The simplest version of programming by demonstration is accomplished by macro recorders, which provide users with a way to record their actions. The user issues the “Record” command, performs a series of actions, and then issues the “Stop” command.
Project Management
Project management involves the use of knowledge, skills, tools, and techniques to complete a series of tasks, ensuring the delivery of value and achievement of a desired outcome.
Project Management Software
Project management software is an umbrella term that refers to an array of platforms designed to help people manage projects, tasks, and schedules.
Prototyping
Prototyping is an important step in the development of a practical artificial intelligence application. An AI software prototype is usually a working piece of software that performs a limited set of the functions that the software designer envisages will be required by the user. It is used to convey to the users a clear picture of what is being developed, to ensure that the software will serve the intended purpose. An AI prototype, contrary to the practice with many other sorts of prototypes, is grown into the finished product, subject to changes at the request of the user. Unlike much other software, AI software cannot be subjected to hard verification tests, as it mirrors non-mathematical human reasoning, so the prototyping step provides necessary confirmation that the software will be “good enough” for its purpose at the expected cost.
Predictive Analytics
Predictive analytics is the use of historical data to predict or forecast future trends and events that can help drive strategic decisions.
Python
Python is a high-level programming language designed to be easy to read and simple to implement. It is open source, which means it is free to use, even for commercial applications. Python can run on Mac, Windows, and Unix systems and has also been ported to Java and .NET virtual machines.
Q
Qualitative Reasoning
Inexact reasoning; the opposite of quantitative reasoning. Also see Commonsense Reasoning.
R
Recommendation Systems
Also known as recommender systems, recommendation systems are a class of machine learning system that utilizes data to predict, narrow down, and find what people are looking for among an exponentially growing number of options.
Recurrent Neural Network (RNN)
A type of neural network that makes sense of sequential information, recognizes patterns, and creates outputs based on those calculations.
Reinforcement Learning
A machine learning (ML) technique that trains software to make decisions leading to the best results, often mimicking the trial-and-error learning process that humans use to accomplish their goals.
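A standard trial-and-error method of this kind is tabular Q-learning; below is a toy version on an invented five-state corridor world, where the agent learns to walk toward a reward at the right end:

```python
import random

n_states, actions = 5, [-1, +1]          # move left or right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.5    # learning rate, discount, exploration

for episode in range(300):
    s = 0
    for step in range(100):              # cap episode length
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda b: q[(s, b)])
        s2 = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s2 == n_states - 1 else 0.0
        # Update the estimate toward reward plus discounted future value.
        q[(s, a)] += alpha * (reward + gamma * max(q[(s2, b)] for b in actions)
                              - q[(s, a)])
        s = s2
        if s == n_states - 1:
            break

print([max(actions, key=lambda b: q[(s, b)]) for s in range(n_states - 1)])
# [1, 1, 1, 1]: the learned policy always moves toward the goal
```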
Relevance Feedback
Relevance feedback methods are used in information retrieval systems to improve the results produced from a particular query by modifying the query based on the user’s reaction to the initial retrieved documents. Specifically, the user’s judgments of the relevance or non-relevance of some of the documents retrieved are used to add new terms to the query and to reweight query terms. For example, if all the documents that the user judges as relevant contain a particular term, then that term may be a good one to add to the original query.
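One classic way to do this reweighting is the Rocchio formula: move the query vector toward the documents judged relevant and away from those judged non-relevant. The toy term-weight vectors below are invented:

```python
import numpy as np

query      = np.array([1.0, 0.0, 0.5, 0.0])
relevant   = [np.array([0.9, 0.1, 0.8, 0.6]),
              np.array([0.8, 0.0, 0.9, 0.7])]
irrelevant = [np.array([0.1, 0.9, 0.0, 0.1])]

alpha, beta, gamma = 1.0, 0.75, 0.15      # conventional Rocchio weights
new_query = (alpha * query
             + beta * np.mean(relevant, axis=0)
             - gamma * np.mean(irrelevant, axis=0))
new_query = np.clip(new_query, 0, None)   # drop negative term weights

print(new_query)   # the 4th term gains weight: both relevant docs contain it
```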
Resource Scheduling
Resource scheduling consists of identifying when project resources are needed and allocating them based on capacity planning and resource availability. This process helps ensure that there is no over- or under-allocation of resources, so that projects get done on time and on budget.
Robotics
Robotics can be defined as the intersection of science, technology, and engineering devoted to the design, construction, and use of mechanical robots for the purpose of replicating or substituting human actions.
Rule-Based System
An expert system based on IF-THEN rules for representing knowledge.
S
Signal Filtering
Signal filtering is a technique for removing noise or static from a signal so that the clear, underlying signal remains. This is a conventional technique commonly used by electrical engineers and others.
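The simplest such filter is a moving average, which damps high-frequency noise; the signal below is invented:

```python
def moving_average(signal, window=3):
    """Replace each sample with the mean of a small window around it."""
    half = window // 2
    smoothed = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - half): i + half + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

noisy = [1.0, 1.2, 0.8, 1.1, 5.0, 1.0, 0.9, 1.1]     # one noise spike
print([round(x, 2) for x in moving_average(noisy)])  # the spike is damped
```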
Sentiment Analysis
The process of analyzing digital text to determine whether the emotional tone of the message is positive, negative, or neutral.
Simulated Annealing
Simulated annealing is an optimization method based on an analogy with annealing, the physical process of toughening alloys, such as steel, by heating them and cooling them slowly. In simulated annealing, an artificial “temperature” is used to control the optimization process of finding the overall maximum or minimum of a function. Just as cooling a metal slowly allows the atoms time to move to the optimum positions for toughness, giving the search time in simulated annealing permits a successful search for the global optimum and avoids being trapped at local suboptima. It is used, for example, to optimize the routing of planes by airlines for the most efficient use of the fleet. It was devised by S. Kirkpatrick and colleagues.
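A minimal simulated-annealing loop in Python, minimizing an invented bumpy one-dimensional function:

```python
import math
import random

def f(x):
    return x**2 + 10 * math.sin(3 * x)     # objective with several local minima

x = random.uniform(-5, 5)
temperature = 10.0
while temperature > 1e-3:
    candidate = x + random.gauss(0, 0.5)    # small random move
    delta = f(candidate) - f(x)
    # Accept improvements always; accept uphill moves with a probability
    # that shrinks as the temperature falls.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.99                     # cooling schedule

print(round(x, 2), round(f(x), 2))
```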
Simulation
A simulation is a system that is constructed to work, in some ways, analogously to another system of interest. The constructed system is usually made simpler than the original system so that only the aspects of interest are mirrored. Simulations are commonly used to learn more about the behavior of the original system, when the original system is not available for manipulation. It may not be available because of cost or safety reasons, or it may not be built yet and the purpose of learning about it is to design it better. If the purpose of learning is to train novices, then cost, safety, or convenience are likely to be the reasons to work on a simulated system. The simulation may be a computer simulation (perhaps a realistic one of a nuclear power station’s control room, or a mathematical one such as a spreadsheet for “what-if” analysis of a company’s business); or it may be a small-scale physical model (such as a small-scale bridge, or a pilot chemical plant).
Statistical Learning
Statistical learning techniques attempt to construct statistical models of an entity based on surface features drawn from a large corpus of examples. These techniques generally operate independently of specific domain knowledge, training instead on a set of features that characterize an input example. In the domain of natural language, for example, statistics of language usage (e.g., word trigram frequencies) are compiled from large collections of input documents and are used to categorize or make predictions about new text. Systems trained through statistical learning have the advantage of not requiring human-engineered domain modeling. However, their strong dependence on the input corpus has the disadvantage of limiting their applicability to new domains, requiring access to large corpora of examples and a retraining step for each domain of interest. Statistical techniques thus tend to have high precision within a domain at the cost of generality across domains.
Structural Pattern Recognition
Structural pattern recognition is a form of pattern recognition in which each object can be represented by a variable-cardinality set of symbolic, nominal features. This allows for representing pattern structures, taking into account more complex interrelationships between attributes than is possible with the flat, numerical feature vectors of fixed dimensionality used in statistical classification.
One way to represent such structure is by means of strings of symbols from a formal language; in this case, the differences in the structures of the classes are encoded as different grammars. A second way to represent relations is with graphs, where nodes are connected if corresponding subpatterns are related. Structural methods provide descriptions of items, which may be useful in their own right. For example, syntactic pattern recognition can be used to find out what objects are present in an image. Furthermore, structural methods are strong in finding a correspondence mapping between two images of an object. Under natural conditions, corresponding features will be in different positions and/or may be occluded in the two images, due to camera attitude and perspective, as in face recognition. A graph-matching algorithm will yield the optimal correspondence.
Supervised Learning
Organization and training of a neural network by a combination of repeated presentation of patterns, such as alphanumeric characters, and required knowledge. An example of required knowledge is the ability to recognize the difference between two similar characters such as O and Q. Synonym: learning with a teacher. Contrast with self-organized system; unsupervised learning.
Swarm Behavior
From the perspective of the mathematical modeler, it is an emergent behavior arising from simple rules that are followed by individuals and does not involve any central coordination.
T
Tactical Diagrams
Tactical diagrams describe actions that increase, reduce, or maintain specific levels related to the objectives. Tactics can be viewed as more concrete strategies of smaller scope and greater specificity.
Task Transition Diagrams
The state of an activity instance changes when a significant step in the execution of the activity instance occurs. The states and the state transitions depend on the type of activity, so they are important in the life cycle of basic activities. In contrast to the state diagrams for process instances, activity end states are not explicitly exposed. The life cycle of an activity depends on the enclosing process, and activities are always deleted with the process instance.
Time Series Analysis
A time series is a sequence of observations of a particular variable over time (e.g., the daily closing level of Dow Jones Industrial Average). There are a wide range of statistical and temporal data mining techniques for analyzing such data. Two common uses for this type of analysis are forecasting future events (i.e., time series prediction) and searching a database of previous patterns for sequences that are similar to a particular pattern of interest. This is a conventional statistical technique.
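Both uses can be shown in miniature; the series, window size, and query pattern below are invented:

```python
series = [10, 12, 11, 13, 15, 14, 16, 18, 17, 19]

# Forecast: predict the next value as the mean of the last 3 observations.
forecast = sum(series[-3:]) / 3
print(round(forecast, 1))                 # 18.0

# Similarity search: slide a window and score by squared distance.
pattern = [13, 15, 14]

def distance(window):
    return sum((a - b) ** 2 for a, b in zip(window, pattern))

start = min(range(len(series) - len(pattern) + 1),
            key=lambda i: distance(series[i:i + len(pattern)]))
print(start, series[start:start + len(pattern)])   # 3 [13, 15, 14]
```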
Toy System
Small-scale implementation of a concept or model useful for testing a few main features, but unsuitable for complex or real-world problems. For example, a toy rule-based system may contain a few rules to construct an arch out of a number of pre-selected wooden blocks. It is a useful academic approach to unsolved problems. It is not employed in producing practical, real-world solutions.
Truth Maintenance Systems
Many conventional reasoning systems assume that reasoning is the process of deriving new knowledge from old, i.e., the number of things a person or intelligent software believes increases without retracting any existing knowledge, since known truths never change under this form of logic. This is called monotonic logic. However, this view does not accurately capture the way in which humans think since our actions constantly change what we believe to be true. Humans reason nonmonotonically, which means they reason based on incomplete or uncertain information, common sense, default values, changing conditions, and other assertions subject to retraction or revision. Truth maintenance systems seek to emulate the human reasoning process by recording reasons for our beliefs and reasons for retraction or revision of those beliefs, as well as the beliefs themselves. They are particularly useful in keeping track of goals, sub-goals, decisions, tasks, assignments, and design documents on complex projects (such as the design, construction, and testing of a major commercial aircraft) being undertaken by large numbers of people who may work for different organizations in different parts of the world. This is the sort of situation where a decision may be reversed, and all the people who may have to react to that change may not be properly informed. Project management software using TMS can help avoid design problems or wasted effort that can result from this sort of oversight. Also known as Reason Maintenance Systems.
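A toy justification-based truth maintenance step can be sketched in Python; the belief names and data structures are invented for illustration:

```python
beliefs = {}   # name -> set of supporting beliefs (empty set = premise)

def assert_belief(name, supports=()):
    beliefs[name] = set(supports)

def retract(name):
    """Retract a belief and everything whose justification depended on it."""
    beliefs.pop(name, None)
    for other, supports in list(beliefs.items()):
        if any(s not in beliefs for s in supports):
            retract(other)

assert_belief("wing-design-v1")
assert_belief("fuel-capacity", supports=["wing-design-v1"])
assert_belief("range-estimate", supports=["fuel-capacity"])

retract("wing-design-v1")      # reversing the design decision...
print(sorted(beliefs))         # []: the dependent conclusions were retracted too
```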
U
Unsupervised Learning
A type of machine learning algorithm used to draw inferences from datasets consisting of input data without labeled responses. The most common unsupervised learning method is cluster analysis.