Continue Learning about Other Engineering

What is maintenance error?

Maintenance error refers to mistakes made during the maintenance process of equipment, machinery, or systems, which can lead to operational failures, safety hazards, or increased costs. These errors can arise from various factors, including inadequate training, poor communication, lack of proper procedures, or human oversight. Effective maintenance practices and protocols are essential to minimize such errors and ensure safety and reliability in operations.


In the engineering design process, why would you need to repeat some steps?

Repeating (iterating) steps lets the design be built, tested, evaluated and refined, so errors and weaknesses are caught and corrected early and there is little room for errors in the final product.


What is Human Factors Engineering?

Human Factors Engineering is the discipline of applying what is known about human capabilities and limitations to the design of products, processes, systems, and work environments. It can be applied to the design of all systems having a human interface, including hardware and software. Its application to system design improves ease of use, system performance and reliability, and user satisfaction, while reducing operational errors, operator stress, training requirements, user fatigue, and product liability.


Why doesn't software wear out? Explain.

Hardware failure rates: the failure rate of hardware as a function of time follows what is often called the "bathtub curve", which describes the typical failure rate of individual components within a large batch. In a batch of, say, 100 products, a relatively large number fail early on before the rate settles down to a steady level; eventually, age and wear and tear get the better of all of them and the failure rate rises again near the end of the product's life. To assist in quality control, many new batches of products are "soak" tested, perhaps for 24 hours in a hostile environment (temperature and humidity variation, etc.), to pinpoint the units that are likely to fail early in their life; this also highlights any inherent design or production weaknesses. These early failures can be attributed to two things:
• Poor or unrefined initial design. Correcting this results in much lower failure rates for successive batches of the product.
• Manufacturing defects, i.e. defects brought about by poor assembly or materials during production.
Both types of failure can be corrected (either by refining the design or by replacing broken components in the field), which leads to the failure rate dropping to a steady-state level for some period of time. As time passes, however, the failure rate rises again as hardware components suffer the cumulative effects of dust, vibration, abuse, temperature extremes and many other environmental maladies. Stated simply, the hardware begins to wear out.

Software failure rates: software is not susceptible to the environmental problems that cause hardware to wear out. In theory, therefore, the failure-rate curve for software has no wear-out region. Undiscovered defects in the first engineered version of the software cause high failure rates early in the life of a program, but these are corrected (hopefully without introducing other errors) and the curve flattens. The implication is clear: software doesn't wear out. It does, however, deteriorate with maintenance. During its life, software undergoes changes, and it is likely that some new defects will be introduced as a result, causing the failure-rate curve to spike. Before the curve can return to the original steady-state failure rate (i.e. before the new bugs have been removed), another change is requested, causing it to spike again. Slowly, the minimum failure-rate level begins to rise: the software is deteriorating due to change.
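
To make the contrast concrete, here is a rough numerical sketch in Python. The curve shapes and every constant are invented purely for illustration (they are not taken from the answer above or from any real failure data); the only point is that the hardware model rises again at large t while the software model never wears out but creeps upward after each maintenance change.

    import math

    def hardware_failure_rate(t):
        # "Bathtub": an infant-mortality term that decays, a constant
        # useful-life term, and a wear-out term that grows with age.
        return 5.0 * math.exp(-t / 2.0) + 0.5 + 0.02 * t ** 2

    def software_failure_rate(t, changes=(10.0, 20.0, 30.0)):
        # Early defects are found and fixed, so the rate decays to a steady level...
        rate = 5.0 * math.exp(-t / 2.0) + 0.5
        # ...but each maintenance change injects new defects (a spike) and
        # nudges the long-term minimum upward: deterioration due to change.
        for i, c in enumerate(changes):
            if t >= c:
                rate += 2.0 * math.exp(-(t - c)) + 0.05 * (i + 1)
        return rate

    for t in range(0, 41, 5):
        print(f"t={t:2d}  hardware={hardware_failure_rate(t):7.2f}  "
              f"software={software_failure_rate(t):5.2f}")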


What is engineering notation?

Scientific notation is a way to "easily" or "conveniently" write very large or very small numbers. Because such numbers are frequently encountered in the sciences, the term scientific notation was introduced to name this "neat" way of packaging these quantities so that they can be more easily grasped and understood. It presents numbers in a form where their magnitude can be seen at a glance, and it can simplify calculations by letting you concentrate on the significant digits rather than on the orders of magnitude, which are very easily dealt with. That second advantage has somewhat diminished with the widespread availability of calculators and computers, but previously people used log tables and slide rules for multiplication and division, and those calculating devices depended on thinking of numbers in scientific notation and working with the significant digits.

The form of scientific notation
The idea behind scientific notation is to write numbers in terms of powers of ten, either positive (for very large numbers) or negative (for very small ones). As an example, consider the mass of an electron, which written out in full is approximately 0.00000000000000000000000000091 grams (a decimal point followed by 27 zeros and then 91). An easier way to write it uses the significant digits and a power of ten: the number becomes the easily represented 9.1 x 10^-28 g. The simple rule is to move the decimal point to the left or right so that only one figure is to the left of the decimal, write the rest of the significant digits to the right of the decimal, and tack on the appropriate power of ten (positive or negative) to restore the proper value. Scientific notation also avoids the headache and potential errors of counting long strings of zeros.

Coefficient and base in scientific notation
The number 123,000,000 in scientific notation is written as 1.23 x 10^8. The first number, 1.23, is called the coefficient; it always has a single digit before the decimal point, followed by the remaining significant digits, usually only two or three. The second number is called the base, and in scientific notation it must always be 10. In 1.23 x 10^8 the number 8 is the exponent, or power of ten.

How to write a number in scientific notation
For large numbers:
1) Put the decimal after the first digit and drop the zeros. In the number 123,000,000 the coefficient is 1.23.
2) Write the multiplication sign "x" and the base 10.
3) To find the exponent, count the number of places from the "new" decimal point to the end of the number. In 123,000,000 there are 8 places, so the exponent is 8.
Some minor variations have evolved, usually because not all fonts or printers allow superscripts: 123,000,000 can also be written as 1.23 E+8 or 1.23 x 10^8.

For small numbers:
Numbers less than one use negative exponents. For example 0.00000123 second (1.23 microseconds) is written 1.23 E-6 or 1.23 x 10^-6. Take the original number 0.00000123 and shift the decimal point to the right until the coefficient is in proper form, as above; the number of places shifted is the (negative) exponent.

Notes:
a) Numbers less than one all use negative exponents, but what about negative numbers, such as -0.04? We can write this as -4.0 x 10^-2.
b) Always make sure the E is capitalised in 1.23 E-6, otherwise it can be confused with "e", the base of the natural log system.
c) Some scientific and engineering fields have special rules, such as electronics, where the notation usually uses powers divisible by three (-3, 3, 6, 9, 12, etc.), because electronic components are specified using standard SI prefixes such as kilo, micro, nano or pico.
d) Scientific notation is often skipped when it is more convenient to keep numbers in common formats, such as 315 microseconds instead of 3.15 x 10^-4 seconds, but this is a matter of preference.

Scientific notation is normally used for numbers that are either far too large or far too small to be written conveniently in decimal notation [A, B]. For example, the Earth's mass is approximately 5,973,600,000,000,000,000,000,000 kg; in scientific notation this is written as 5.9736 x 10^24 kg. In normalised scientific notation, numbers are written in the form a x 10^n, where a is a number between 1 and 10 and n is a positive or negative whole number [A, B].

In engineering notation, the exponent n is restricted to multiples of 3, so the number always matches a corresponding SI prefix [B]. For example, a distance of 50,000 m would be written as 5 x 10^4 m in scientific notation but as 50 x 10^3 m in engineering notation. Here 10^3 corresponds to the SI prefix "kilo" [C], so the engineering form can be read directly as "fifty kilometres", whereas the scientific form gives the much more unwieldy "five times ten to the power four metres", which is less intuitive even though it is exactly the same distance.

Guidance on converting to and from scientific notation is given in references A and B.

References:
A. Scientific notation - Engineering Maths Help from the 'mathcentre' Academic Website.
B. Scientific notation: Wikipedia entry.
C. List of SI prefixes: Wikipedia entry.
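
As a rough illustration of the engineering-notation rule, here is a small Python sketch. The function names and the prefix table are my own for this example, not part of any standard library, and floating-point edge cases near exact powers of ten are ignored.

    import math

    SI_PREFIXES = {-12: "pico", -9: "nano", -6: "micro", -3: "milli",
                   0: "", 3: "kilo", 6: "mega", 9: "giga", 12: "tera"}

    def scientific(x):
        # One digit before the decimal point, any exponent.
        exp = math.floor(math.log10(abs(x)))
        return f"{x / 10 ** exp:g} x 10^{exp}"

    def engineering(x):
        # Exponent restricted to a multiple of 3 so it maps onto an SI prefix.
        exp = math.floor(math.log10(abs(x)))
        exp -= exp % 3
        prefix = SI_PREFIXES.get(exp, "")
        out = f"{x / 10 ** exp:g} x 10^{exp}"
        return out + (f"  ({prefix})" if prefix else "")

    print(scientific(50_000))       # 5 x 10^4
    print(engineering(50_000))      # 50 x 10^3  (kilo)
    print(scientific(0.00000123))   # 1.23 x 10^-6
    print(engineering(0.00000123))  # 1.23 x 10^-6  (micro)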

Related Questions

What are the methods of minimizing the effects of errors in measurement?

Repeat the measurement several times and average the results to reduce random errors. Read the scale straight on rather than from the side, to avoid parallax error. Use a properly calibrated instrument, replacing a worn or damaged scale, for better readings.


How do you eliminate the collimation errors in traversing?

To eliminate collimation errors in traversing, you can regularly calibrate and adjust your equipment to ensure it is properly aligned. Additionally, you can use methods such as resection or traverse closures to detect and correct any errors in measurement. Proper training and experience in using surveying instruments can also help minimize collimation errors.


Which errors occur in research and how do they arise?

Errors in research can occur due to various factors, including human mistakes, methodological flaws, and biases. Common types of errors include sampling errors, measurement errors, and interpretation errors, which can arise from inadequate sample sizes, faulty data collection methods, or subjective bias in data analysis. These errors can lead to inaccurate conclusions and affect the validity and reliability of research findings. Careful planning, rigorous methodology, and peer review can help minimize these errors.


If you measure density several times would you expect the average of your density measurements to be closer to the actual density than a single measurement?

Yes, by taking multiple measurements and calculating the average, you can reduce the impact of random errors and get closer to the actual density. This is because averaging multiple measurements helps to minimize the effects of outliers or individual errors in any single measurement.
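
A quick simulation makes the point concrete; the true value, the 0.05 g/cm^3 scatter and the sample size below are made-up numbers for illustration only.

    import random

    random.seed(1)                      # fixed seed so the example is repeatable
    true_density = 2.70                 # g/cm^3, roughly aluminium
    # ten readings, each with random (Gaussian) measurement error
    readings = [true_density + random.gauss(0, 0.05) for _ in range(10)]

    average = sum(readings) / len(readings)
    print("error of the first single reading:", round(abs(readings[0] - true_density), 4))
    print("error of the average of 10:       ", round(abs(average - true_density), 4))

Typically the error of the average is smaller, although any single run can buck the trend.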


What are the differences between direct and indirect measurement methods, and how do they impact the accuracy of the results obtained?

Direct measurement methods involve obtaining data through direct observation or physical measurement, while indirect measurement methods involve using other data or calculations to estimate the desired quantity. Direct methods are typically more accurate as they involve measuring the actual quantity of interest, while indirect methods may introduce errors due to assumptions or estimations. The choice of method can impact the accuracy of results obtained, with direct methods generally providing more precise and reliable measurements.
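
As a sketch of how an indirect method picks up extra uncertainty, here density is obtained indirectly from two direct measurements, mass and volume; the numbers and the simple worst-case propagation rule are for illustration only.

    mass, mass_err = 27.0, 0.1      # g, direct measurement and its uncertainty
    volume, vol_err = 10.0, 0.2     # cm^3, direct measurement and its uncertainty

    density = mass / volume
    # Worst-case first-order propagation: for a quotient, the relative
    # uncertainties of the inputs add together.
    rel_err = mass_err / mass + vol_err / volume
    print(f"density = {density:.2f} +/- {density * rel_err:.2f} g/cm^3")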


What procedures are taken to avoid measurement errors using instruments?

To avoid measurement errors when using instruments, calibration is essential, ensuring that the instrument provides accurate readings against known standards. Regular maintenance and proper handling are also crucial to prevent damage or wear that can affect precision. Additionally, using the correct measurement technique and ensuring environmental conditions (like temperature and humidity) are controlled can further minimize errors. Finally, multiple measurements can be taken and averaged to enhance reliability and reduce random errors.


What is a probe compensation adjustment?

A probe compensation adjustment is a calibration process used in various measurement systems, particularly in electronic testing and analysis. It compensates for the effects of the probe's own capacitance and resistance, ensuring that the measurements taken are accurate and reflect the true characteristics of the device under test. This adjustment helps minimize measurement errors caused by the probe's influence on the signal being measured, thus enhancing the reliability of the results.


Is transmittance a more accurate measurement?

Transmittance measures the fraction of light that passes through a material, so it can be used to determine how much light a sample transmits. The accuracy of the measurement, however, depends on factors such as the quality of the equipment used and the conditions under which the measurement is taken. Therefore, while transmittance can provide an accurate measurement, proper procedures and equipment are needed to minimize errors.


Which dimensioning method permits the accumulation of errors and should be avoided?

The dimensioning method that permits the accumulation of errors and should be avoided is chain dimensioning. In this approach each dimension is placed end to end, measured from the previous feature rather than from a common datum, so any inaccuracy in an earlier dimension propagates into every dimension that follows and the tolerances can stack up into a significant overall discrepancy. Baseline (datum) dimensioning or coordinate dimensioning, in which every feature is located from a common reference, minimizes this error propagation and is preferred for accuracy.
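
A tiny worked example (the tolerance value and the number of features are invented) shows why the accumulation matters.

    tol = 0.1          # +/- tolerance on each individual dimension, in mm
    n_features = 5     # features located one after another along the part

    # Chain dimensioning: each feature is located from the previous one, so in
    # the worst case every tolerance adds to the error of the last feature.
    chain_worst_case = n_features * tol

    # Baseline dimensioning: every feature is located directly from the datum,
    # so each one stays within a single tolerance of its true position.
    baseline_worst_case = tol

    print(f"chain worst-case error of the last feature: +/- {chain_worst_case:.1f} mm")
    print(f"baseline worst-case error of any feature:   +/- {baseline_worst_case:.1f} mm")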


What reduces the effects of chance errors?

Increasing sample size, using randomization techniques, and conducting statistical analysis can help reduce the effects of chance errors in research studies. These methods can help ensure that the results obtained are more reliable and less influenced by random variability.


Sources of errors encountered in measurement?

Common sources of error encountered in measurement include instrumental errors (a poorly calibrated, worn or faulty instrument), observational errors such as parallax when reading a scale, environmental effects such as temperature, humidity or vibration, random errors from small uncontrolled fluctuations, and human mistakes in reading or recording values.


Why is it best to take the average of several measurements rather than just using one measurement?

Taking the average of several measurements helps to minimize the impact of random errors and fluctuations that can occur in a single measurement. This approach increases the reliability and accuracy of the result, as it accounts for variations and anomalies, leading to a more representative value. Additionally, averaging can help identify systematic errors, providing insights into potential biases in the measurement process. Overall, it enhances the precision of the findings.