Large problems are usually solved by first breaking them up into a set of smaller problems. It is also useful to know where to go to find methods, algorithms, etc. that may be useful in your AI work. No list of subfields is ever complete and unique but here is one I use:
1. weak methods
2. search
3. rule based systems
4. semantic networks
5. logic/deduction systems
6. heuristics
7. discovery/creativity/induction
8. natural language
9. neural networks
10. distributed AI/collective intelligence
11. robotics/embodiment
12. compression
13. automata/state machines
14. statistics
15. Bayesian statistics
16. planning/scheduling
17. case-based reasoning/memory-based reasoning
18. blackboard systems
19. nonstandard logics (including temporal logic)
20. representation
21. consciousness
22. learning/data mining
23. theorem proving
24. automatic programming
25. genetic programming
26. qualitative reasoning
27. constraint-based reasoning
28. agents
29. fuzzy logic
30. diagrammatic reasoning (including spatial logics)
31. model-based reasoning
32. emotion
33. ontology
34. quantum computing
35. analogy
36. parallel computing
37. pattern recognition/comparison
38. causality
39. deductive databases
40. language of thought
41. artificial life
42. philosophy of AI and mind
43. innateness/instinct
44. AI languages
45. memory/databases
46. decision theory
47. cognitive science
48. control system theory
49. digital electronics/hardware
50. dynamical systems
51. self-organizing systems
52. perception/vision/image manipulation
53. architectures
54. complexity theory
55. emergence
56. brain modeling
57. modularity
58. hybrid AI
59. optimization
60. goal-oriented systems
61. feature extraction/detection
62. utility/values/fitness/progress
63. multivariate function approximation
64. formal grammars and languages
65. theory of computation
66. classifiers/concept formation
67. theory of problem solving
68. artificial immune systems
69. curriculum for learners
70. speech recognition
71. theory of argumentation/informal logic
72. common sense reasoning
73. coherence/consistency
74. relevance/sensitivity analysis
75. semiotics
76. machine translation
77. pattern theory
78. operations research
79. game theory
80. automation
81. behaviorism
82. knowledge engineering
83. semantic web
84. sorting/typology/taxonomy
85. extrapolation/forecasting/interpolation/generalization
86. cooperation theory
87. systems theory
There is, of course, considerable overlap among these, and some are more fundamental to AI than others.
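As a small illustration of one of the subfields above (search, item 2), here is a minimal breadth-first search sketch in Python. The graph, node names, and the helper name `bfs_path` are hypothetical, chosen just for demonstration; this is a sketch of the general technique, not any particular system's implementation.

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search: return a shortest path (by edge count)
    from start to goal, or None if goal is unreachable."""
    frontier = deque([[start]])  # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

# Hypothetical toy graph (adjacency lists) for demonstration.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
}

print(bfs_path(graph, "A", "E"))  # a shortest path from A to E
```

Because the frontier is a FIFO queue, the first path that reaches the goal is guaranteed to have the fewest edges; swapping the queue for a priority queue ordered by a heuristic turns this into the informed search methods (e.g. A*) used throughout the field.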
Natural Language Processing, Machine Consciousness, Computational Creativity, Robo-ethics, Pattern Recognition (including audio and visual recognition), Agents, Intelligent Tutoring Interfaces... the list goes on and on.
AI research has proven to be a breeding ground for computer science subdisciplines such as pattern recognition, image processing, neural networks, natural language processing, and game theory.
Robotics, computer vision, image processing, voice recognition, and neural networks.
Intelligence testing spawned a new avenue in the study of psychology known as psychometrics.
The Association for the Advancement of Artificial Intelligence (AAAI) was founded in 1979.
The Journal of Artificial Intelligence Research (JAIR) was created in 1993.
The journal Electronic Transactions on Artificial Intelligence was created in 1997.
Artificial Intelligence II was created on 1994-05-30.
The film A.I. Artificial Intelligence was released on 2001-06-29.
The album A.I. Artificial Intelligence was released in 2001.
Nils J. Nilsson has written 'Learning Machines', 'The Mathematical Foundations of Learning Machines', and 'Artificial Intelligence', all on the subjects of artificial intelligence and machine learning.
Artificial intelligence (AI) has many limitations, such as:
1. Lack of creativity: AI cannot invent genuinely new things; it can only make recommendations based on data that already exists, and it struggles to apply common-sense reasoning to new circumstances.
2. Cost: AI can be expensive to develop and implement.
3. Lack of trust: AI systems may not be completely trustworthy all the time, which can cause people to doubt their ability to make decisions.
4. Unreliable results: AI systems may not always be fully reliable.
Other limitations of AI include algorithmic bias, a lack of ethics and emotion, vulnerability to adversarial attacks, and limited understanding of context.
In artificial intelligence, LISP refers to the LISP programming language (short for "LISt Processor"), created by John McCarthy in 1958 and long the dominant language of AI research; "Locator/ID Separation Protocol" is an unrelated networking term that shares the acronym.