The F distribution has two parameters, ν1 and ν2, and is denoted by F(ν1, ν2). If the variances are estimated in the usual manner, the degrees of freedom are (n1 − 1) and (n2 − 1), respectively. Also, if both populations have equal variance, that is, σ1² = σ2², the F statistic is simply the ratio S1²/S2². The equation describing the distribution of the F statistic is quite complex and is of little use to us in this text. However, some of the characteristics of the F distribution are of interest:

1. The F distribution is defined only for nonnegative values.
2. A different table is needed for each combination of degrees of freedom. Fortunately, for most practical problems only a relatively few probability values are needed, and scientific calculators, spreadsheet applications, and statistical software have more powerful tools for calculating probabilities from this distribution.

Appendix Table A.4 gives values of the F distribution for selected degrees-of-freedom combinations for right-tail areas of 0.1, 0.05, 0.025, and 0.01. There is one table for each probability (tail area), and the values in the table are F values for the numerator degrees of freedom ν1, indicated by the column headings, and the denominator degrees of freedom ν2, given by the row headings. The choice of which variance estimate to place in the numerator is somewhat arbitrary; hence the table of probabilities of the F distribution always gives the right-tail value.

It is important to note when developing a procedure for testing a given null hypothesis H0 that, in any test, two different types of errors can result. The first of these, called a type I error, results if the test incorrectly calls for rejecting H0 when it is indeed correct. The second, called a type II error, results if the test calls for accepting H0 when it is false. Now, as was previously mentioned, the objective of a statistical test of H0 is not to determine explicitly whether or not H0 is true but rather to determine whether its validity is consistent with the resultant data. Hence, with this objective, it seems reasonable that H0 should be rejected only if the resultant data are very unlikely when H0 is true. The classical way of accomplishing this is to specify a value α and then require the test to have the property that whenever H0 is true, its probability of being rejected is never greater than α. The value α, called the level of significance of the test, is usually set in advance, with commonly chosen values being small numbers such as 0.05 or 0.01. In other words, the classical approach to testing H0 is to fix a significance level α and then require that the test have the property that the probability of a type I error occurring can never be greater than α. Thus, at the α = 0.05 level, a test of the null hypothesis that θ = 1 calls for rejection when the sample average differs from 1 by more than 1.96 divided by the square root of the sample size (for a normal population with known variance σ² = 1, since 1.96 is the two-sided 5% critical value of the standard normal).

Suppose we want to formally test the hypothesis that the distribution F(x) of the mechanical device failure data is lognormal with µ = 1.0 and σ = 1.0. All of these test statistics are significant at the 0.01 level, and so this null hypothesis must be rejected. This is consistent with our conclusion in the last section based on the 95% S-band.

We turn now to a comparison of the performances of the various tests. As noted already, most of the information for discriminating between parametric models is in the tails, so procedures that give more weight to the tails will perform better. We have already seen this in the context of confidence bands, where the S-band did better than the K-band. Similarly, the AD test, which gives greater weight to the tails, is better than the CM test in this regard. Overall, the K-test is dominated by the other tests discussed in this section and is not recommended. The advantage of the S-test is that it yields a confidence band that can be displayed graphically; however, it has to be restricted to a range 0 < a ≤ F̂(x) ≤ b < 1. The AD test, on the other hand, does not suffer from this problem. There are many other tests in the literature for testing the simple hypothesis (7.1); interested readers should refer to D'Agostino and Stephens, Chapters 4 and 8.

Ross, in Introduction to Probability and Statistics for Engineers and Scientists (Fourth Edition), 2009, 7.1 INTRODUCTION: Let X1, …, Xn be a random sample from a distribution Fθ that is specified up to a vector of unknown parameters θ.
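The variance-ratio convention discussed above, placing the larger sample variance in the numerator so that only right-tail critical values are ever needed, can be sketched in a few lines of Python. The two small samples below are made-up illustrative numbers, not data from the text:

```python
def sample_variance(xs):
    """Unbiased sample variance S^2, with n - 1 in the denominator."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

def f_ratio(sample1, sample2):
    """F statistic S1^2 / S2^2 with the larger variance in the numerator,
    so only right-tail critical values are needed.
    Returns (F, numerator df, denominator df)."""
    v1, v2 = sample_variance(sample1), sample_variance(sample2)
    if v1 >= v2:
        return v1 / v2, len(sample1) - 1, len(sample2) - 1
    return v2 / v1, len(sample2) - 1, len(sample1) - 1

# Illustrative samples only
a = [4.5, 6.1, 5.8, 5.2, 4.9, 6.3]
b = [5.0, 5.1, 4.9, 5.2, 5.0]
f, df_num, df_den = f_ratio(a, b)
```

The returned degrees of freedom (n1 − 1, n2 − 1) are exactly the ν1 (column) and ν2 (row) indices one would use to look up the right-tail value in a table such as Appendix Table A.4.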
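The rejection rule for the null hypothesis θ = 1 can likewise be illustrated with a short simulation. Assuming a normal population with known σ = 1 (the setting in which 1.96/√n is the α = 0.05 cutoff), rejecting whenever |X̄ − 1| > 1.96/√n should produce a type I error rate near 0.05; the helper name `reject_null` and the simulated data are illustrative assumptions, not from the text:

```python
import math
import random

def reject_null(sample, theta0=1.0, z=1.96):
    """Two-sided test of H0: theta = theta0 for a normal mean with known
    sigma = 1: reject when |x_bar - theta0| > z / sqrt(n)."""
    n = len(sample)
    x_bar = sum(sample) / n
    return abs(x_bar - theta0) > z / math.sqrt(n)

# Estimate the type I error rate by simulating repeatedly under H0 (theta = 1).
random.seed(1)
trials = 20_000
rejections = sum(
    reject_null([random.gauss(1.0, 1.0) for _ in range(25)])
    for _ in range(trials)
)
type_i_rate = rejections / trials  # should be close to alpha = 0.05
```

Because the test is calibrated so that the probability of rejecting a true H0 never exceeds α, the simulated rejection frequency under H0 hovers around 0.05.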
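For the goodness-of-fit comparison above, the K (Kolmogorov), CM (Cramér–von Mises), and AD (Anderson–Darling) statistics for a simple null hypothesis can all be computed from the probability-integral transform u(i) = F(x(i)). This is a generic sketch using the standard textbook formulas for the three statistics, with a synthetic sample standing in for the mechanical device failure data; the parameter values µ = 1, σ = 1 follow the example in the text:

```python
import math
import random

def lognorm_cdf(x, mu=1.0, sigma=1.0):
    """Lognormal CDF: Phi((ln x - mu) / sigma)."""
    z = (math.log(x) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gof_statistics(data, cdf):
    """Kolmogorov (D), Cramer-von Mises (W^2), and Anderson-Darling (A^2)
    statistics for a fully specified null CDF."""
    n = len(data)
    u = sorted(cdf(x) for x in data)  # probability-integral transform
    # Kolmogorov: largest deviation between empirical and null CDF
    d = max(max((i + 1) / n - u[i], u[i] - i / n) for i in range(n))
    # Cramer-von Mises: averaged squared deviation
    w2 = 1.0 / (12 * n) + sum((u[i] - (2 * i + 1) / (2 * n)) ** 2
                              for i in range(n))
    # Anderson-Darling: weights deviations in the tails more heavily
    a2 = -n - sum((2 * i + 1) * (math.log(u[i]) + math.log(1.0 - u[n - 1 - i]))
                  for i in range(n)) / n
    return d, w2, a2

# Synthetic sample drawn from the hypothesised lognormal, for illustration only
random.seed(0)
sample = [math.exp(random.gauss(1.0, 1.0)) for _ in range(50)]
d, w2, a2 = gof_statistics(sample, lognorm_cdf)
```

The A² formula makes the tail weighting explicit: the log terms blow up when u is near 0 or 1, which is precisely why the AD test extracts more information from the tails than the CM test does.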