This is a very good question. Models are usually the result of quantifying concepts we believe to be true. When additional data is collected, we expect the model (or representation) to improve, but this is not always the case: sometimes new data forces us to reconsider basic concepts and change the model itself. As an example, suppose we have a model that predicts earthquakes everywhere on Earth, but an earthquake occurs that the model did not anticipate. This new data tells us that something was wrong with our model, and that the model needs to be changed.