I assume you are asking this because you are working on something like a research project, a term paper, or a research proposal. Validity refers to whether a source or method can actually give you insight into the question you are investigating. For instance, if you were doing a research project about frogs, a book about the history of automobiles would not be relevant; in fact, it would not be a valid source. You also have to consider whether your experiment or method is one that can produce good, relevant data. Reliability, on the other hand, refers to whether a source or method can be consistently trusted to give the same results. For instance, a science experiment would be considered reliable if it gave consistent results every time it was reproduced. So if you come up with an experiment that sometimes gives you one result, other times another, and maybe yet another on other occasions, then your experiment is not reliable.
Remember validity and reliability can however hold slightly different meanings depending on what context you are applying them to.
Good luck:)
It means that it can change: during an experiment, it will vary depending on the actual data you have. So you take the information you have and ask yourself: what did the experimenter do? Did the data/information make sense? Then you take a second look and write down anything that is not normal about the data.
How consistently a method assesses something is referred to as its reliability. The measurement is regarded as reliable if the same result can be consistently obtained by applying the same techniques under the same conditions. For example, you might measure the temperature of a liquid sample numerous times under the same circumstances and check that you get the same reading each time.
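As a quick sketch of the repeated-temperature-measurement idea above, here is a minimal Python example (the readings are made up for illustration): a small spread across repeats suggests the method is reliable.

```python
import statistics

# Hypothetical repeated temperature readings (in degrees C) of the same
# liquid sample under identical conditions.
readings = [21.4, 21.5, 21.4, 21.6, 21.5]

mean = statistics.mean(readings)
spread = statistics.stdev(readings)  # sample standard deviation

# A standard deviation that is tiny relative to the mean suggests the
# measurement method returns nearly the same value each time.
print(f"mean = {mean:.2f} C, standard deviation = {spread:.2f} C")
```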
data that is reliable...
suit
It's the definition of theory.
It is reliable
In developing a project, data transcription takes place at the input stage.
Anything less and the sample size would be too small for reliable data
Normalizing data

If by "normalizing data" is meant the process by which data is transformed so that it more closely approximates a normal distribution, one method is to take the logarithm of the individual data points to the base 10.

If by "normalizing data" is meant the process by which data is transformed so that it can be compared with other data from a different scale (standardization), one method is to convert the individual data points to Z scores. Z scores have a mean of zero. The individual data points are converted to numbers that are multiples or fractions of one standard deviation (SD). A datum that is equal to the mean gets a Z score of zero. A datum that is 1.5 SD above the mean gets a Z score of +1.5. A datum that is half a SD below the mean gets a Z score of -0.5.

Data    Z score
 60     -1.39
 65     -1.04
 70     -0.69
 80      0.00
 90      0.69
 95      1.04
100      1.39

Mean: 80.0, SD: 14.4

The lefthand column is the raw data. The mean is 80, and the SD is 14.4. The Z scores (the standardized data) based on that mean and SD are in the righthand column.
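The standardization shown above can be reproduced with a few lines of Python. Note that the quoted SD of 14.4 matches the population standard deviation (`pstdev`), not the sample one:

```python
import statistics

data = [60, 65, 70, 80, 90, 95, 100]

mean = statistics.mean(data)   # 80.0
sd = statistics.pstdev(data)   # population SD, about 14.4

# Convert each raw datum to a Z score: its distance from the mean
# expressed in standard deviations.
z_scores = [(x - mean) / sd for x in data]

for x, z in zip(data, z_scores):
    print(f"{x:>4}  {z:+.2f}")
```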
what are three features of reliable data?
It is data you can trust
Reliable data is trusted data that can be used without doubt. Replicable data is data that is allowed to be copied or can be duplicated.
The mean is used for evenly spread data, and median for skewed data. Not sure when the mode should be used.
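To see why the median is preferred for skewed data, here is a small Python sketch with a made-up right-skewed sample (one large outlier drags the mean upward while the median stays put):

```python
import statistics

# Made-up right-skewed sample: six typical values plus one big outlier.
skewed = [10, 11, 12, 12, 13, 14, 95]

print("mean:  ", statistics.mean(skewed))    # pulled toward the outlier
print("median:", statistics.median(skewed))  # resistant to the outlier
print("mode:  ", statistics.mode(skewed))    # most frequent value
```

For what it's worth, the mode is typically used for categorical data, where "most frequent value" is the only sensible notion of a typical value.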
it is quantitative
A reliable data transfer is one that is stable. For a reliable data transfer, a connection-oriented protocol is used. A connection-oriented protocol is one in which the sender receives an acknowledgement from the receiver before sending further data.
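The acknowledge-before-sending-more behaviour described above can be sketched as a toy stop-and-wait loop. This is a simulation, not a real protocol implementation: the lossy channel and all function names are invented for illustration.

```python
import random

random.seed(1)

def unreliable_deliver(chunk, received):
    """Simulated channel that loses roughly 30% of transmissions."""
    if random.random() < 0.7:
        received.append(chunk)
        return True   # receiver got it, ACK comes back
    return False      # lost in transit, no ACK

def send_reliably(chunks):
    """Stop-and-wait: retransmit each chunk until it is acknowledged,
    and only then move on to the next one."""
    received = []
    for chunk in chunks:
        while not unreliable_deliver(chunk, received):
            pass  # no ACK: resend the same chunk
    return received

print(send_reliably(["hello", "world"]))
```

Even though the simulated channel drops transmissions, every chunk eventually arrives, in order, because the sender never advances without an acknowledgement.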
How consistent your data is, in the sense that you have repeated an experiment a number of times. I.e., one would answer the question 'how reliable were your results?' with something like 'they were very reliable, as the experiment was repeated 67 times and gave consistent results.'
It is better than keeping unreliable data!
data integrity
false
data integrity