Video Games

Is 20Q a Neural Network?


Top Answer
Wiki User
2009-08-25 20:26:53

20Q is a true neural network. The answers to questions stimulate target nodes (objects), which in turn stimulate the next question to ask. The "brain" is about as complex as an insect's brain.
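The mechanism described above, answers stimulating object nodes through weighted connections, can be sketched as a toy scoring network. This is an illustration only: the objects, questions, and weights below are invented, not 20Q's actual learned values.

```python
# Toy sketch of a 20Q-style network: each answer raises or lowers the
# activation of candidate object nodes through a weight matrix.
# Objects, questions, and weights are invented for illustration.
objects = ["cat", "car", "apple"]
questions = ["Is it alive?", "Is it bigger than a person?", "Can you eat it?"]

# weights[q][o]: how strongly a "yes" to question q supports object o
weights = [
    [1.0, -1.0, 0.5],   # Is it alive?
    [-0.5, 1.0, -1.0],  # Is it bigger than a person?
    [0.2, -1.0, 1.0],   # Can you eat it?
]

def guess(answers):
    """answers: list of +1 (yes) / -1 (no), one entry per question."""
    scores = [0.0] * len(objects)
    for q, a in enumerate(answers):
        for o in range(len(objects)):
            scores[o] += a * weights[q][o]   # each answer stimulates object nodes
    return objects[scores.index(max(scores))]

print(guess([+1, -1, +1]))  # alive, small, edible: prints "apple"
```

In the real game the weights are adjusted from millions of plays, which is what makes the answer above describe it as a learned network rather than a fixed decision tree.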

User Avatar

Related Questions

None, it's all in your head! Actually, it could be considered a simple expert system.

The word neural is associated with the brain. An artificial neural network is a computer system whose design is modeled on the brain and nervous system.

momentum neural network
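The answer above names momentum, a refinement of gradient-descent training in which each weight update keeps a fraction of the previous update, smoothing and speeding descent. A minimal one-parameter sketch, in which the quadratic loss and the constants are invented for illustration:

```python
# Gradient descent with momentum on a toy loss L(w) = w**2.
# The loss, learning rate, and momentum coefficient are illustrative choices.
def train_with_momentum(w=5.0, lr=0.1, beta=0.9, steps=100):
    velocity = 0.0
    for _ in range(steps):
        grad = 2 * w                            # dL/dw for L(w) = w**2
        velocity = beta * velocity - lr * grad  # keep a fraction of the last update
        w += velocity
    return w
```

With `beta = 0.0` this reduces to plain gradient descent; the nonzero momentum term carries the weight through small bumps in the loss surface.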

Advantages and disadvantages of an artificial neural network.

Advantages:
- A neural network can perform tasks that a linear program cannot.
- When an element of the neural network fails, the network can continue without any problem because of its parallel nature.
- A neural network learns and does not need to be reprogrammed.
- It can be applied in almost any application.

Disadvantages:
- The neural network needs training to operate.
- The architecture of a neural network is different from the architecture of a microprocessor and therefore needs to be emulated.
- Large neural networks require long processing times.

The neural network needs training to operate. The architecture of a neural network is different from the architecture of a microprocessor and therefore needs to be emulated.

Neural networks have nothing to do with neutrons.

Can someone answer this question?

By forming a neural network.

Question: what structure transduces light into neural impulses? Answer: the retina, which is itself a small neural network in the eye.

A neural network is basically an attempt to simulate the brain. Artificial Intelligence uses machines and software to simulate intelligent behavior.

How does the olfactory neural network work?

Neural Network Protocol is the full form of NNP.

I'm not sure how to construct an artificial neural network.

MMNN stands for multi-modular neural network. It could also stand for Map Mole News Network.

The learning rate is a constant in a neural network's training algorithm that controls the speed of learning. It scales how much of the current adjustment is applied to each weight. The higher the rate is set, the faster the network learns, but if there is large variability in the input, a high rate may prevent the network from learning well, if at all.
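The trade-off described above can be demonstrated on a one-parameter toy problem. This is a sketch with an invented quadratic loss, not code from any particular network: a small rate learns slowly, a moderate rate converges quickly, and too large a rate makes the weight diverge.

```python
# Effect of the learning rate on plain gradient descent for L(w) = w**2.
# The loss and the rates tried are illustrative choices.
def descend(lr, w=5.0, steps=50):
    for _ in range(steps):
        w -= lr * 2 * w   # gradient of w**2 is 2w
    return w

for lr in (0.01, 0.4, 1.1):
    # lr=0.01: still far from the minimum (slow learning)
    # lr=0.4:  essentially at the minimum
    # lr=1.1:  the weight oscillates and blows up
    print(lr, descend(lr))
```

Each update multiplies the weight by (1 - 2·lr), so the process converges only while that factor stays below 1 in magnitude, which is the formal version of "too high a rate and the network will not learn at all."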

Outline two types of neural network that you might find in the visual system and state the advantage of each. I would appreciate it if someone could help me with this question. Thanks.

- local minima
- generalization/overfitting
- hard to interpret

Frame 72 is labeled as a Malformed Packet. What does this mean?

It depends on the context and application. A neural network is a network fashioned after the brain, in which pathways open to trigger responses from multiple "data centers" based on a stimulus. A LAN is nothing like it, other than the similarity that it also has a transmission medium. Yet a LAN is useless without a brain behind it.

1. They are a black box: knowledge of their internal workings is never fully available.
2. Fully implementing a standard neural network architecture requires a lot of computational resources; for example, you might need something like 100,000 processors connected in parallel to even somewhat mimic the neural network of a cat's brain. In other words, the computational burden is large.
3. Remember the No Free Lunch theorem: a method good at solving one problem may not be as good at solving another. Although neural networks mimic the human brain, a trained network is still limited to the specific problems it was applied to.
4. Applying neural networks to human-related problems requires time to be taken into account, and it has been noted that modeling time is hard in neural networks.
5. The Vapnik-Chervonenkis (VC) dimension of a neural network, a combinatorial parameter that measures its expressive power, is still not well understood.
6. They are only approximations of a desired solution, and error in them is inevitable.
7. Lastly, they require a large training set to be trained properly and to give outputs close enough to the desired ones. Knowing how much training data is enough for a desired output depends entirely on the trainer, but a very large training set is important so that the neural network gains a sufficient understanding of the underlying structure.

20Q - 2009 1-1 was released on: USA: 13 June 2009

20Q - 2009 1-2 was released on: USA: 20 June 2009

20Q - 2009 1-3 was released on: USA: 27 June 2009

20Q - 2009 1-4 was released on: USA: 11 July 2009

Copyright © 2020 Multiply Media, LLC. All Rights Reserved. The material on this site cannot be reproduced, distributed, transmitted, cached, or otherwise used, except with the prior written permission of Multiply.