Development of statistical ideas

  1. Clarify question about population
  2. Collect data from random sample
  3. Analyse data from sample
  4. Interpret results for population

Data

  1. Longitudinal data are data about a relatively small number of individuals collected over a period of time.
  2. Cross-sectional data are data about a large number of individuals collected at one point in time.
  3. Data which are measured by sorting the values into various categories are called categorical data.
  4. Data which can be ordered or ranked are called ordinal data.
  5. Data which consist of measurements on an interval scale are called interval data or interval scale data.
  6. A prospective study is an epidemiological study in which a potential cause of a disease is investigated by finding two groups of people, one of which is exposed to the potential cause and the other of which is not. The two groups are then followed up for some time to see if one group suffers more from the disease than the other.
  7. A retrospective study is an epidemiological study in which potential causes of a disease are investigated by finding two groups of people, one suffering from the disease and the other not. The history of the people in the groups is then investigated to identify differences in their exposure to different potential causes.

General form of a linear programming model

  1. optimize (maximize or minimize) z = c1x1 + c2x2 + c3x3
  2. subject to a11x1 + a12x2 + a13x3 <= / = / >= b1
  3. a21x1 + a22x2 + a23x3 <= / = / >= b2
  4. a31x1 + a32x2 + a33x3 <= / = / >= b3
  5. x1, x2, x3 >= 0
  6. cj, aij and bi are the parameters of the model
  7. j = 1, 2, 3
  8. i = 1, 2, 3
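
A minimal sketch of this general form in Python, assuming scipy is available; the coefficients below are invented for illustration, and because scipy.optimize.linprog minimises, the maximisation objective is passed in negated:

    # Hypothetical LP: maximize z = 3x1 + 5x2 + 4x3 subject to three
    # <= constraints and x1, x2, x3 >= 0 (linprog's default bounds).
    from scipy.optimize import linprog

    c = [3, 5, 4]                    # cj: objective coefficients
    A = [[2, 3, 0],                  # aij: constraint coefficients
         [0, 2, 5],
         [3, 2, 4]]
    b = [8, 10, 15]                  # bi: right-hand sides

    res = linprog([-v for v in c], A_ub=A, b_ub=b)   # negate c to maximize
    print(res.x, -res.fun)           # optimal x1, x2, x3 and optimal z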

Arrangements

  1. The number of ways of arranging n unlike objects in a line = n! e.g. A, B and C can be arranged in 3! = 6 ways
  2. The number of ways of arranging in a line n objects of which p of one type are alike, q of another type are alike, r of a third type are alike = n! / (p!q!r!)
  3. The number of ways of arranging n unlike objects in a ring when clockwise and anticlockwise arrangements are different = (n-1)!
  4. The number of ways of arranging n unlike objects in a ring when clockwise and anticlockwise arrangements are the same = (n-1)! / 2
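
The four formulas can be evaluated directly with math.factorial; the values of n, p, q and r below are illustrative:

    from math import factorial

    n = 4
    print(factorial(n))                    # n unlike objects in a line: n!
    p, q, r = 2, 1, 1                      # e.g. AABC: 2 alike, then 1 and 1
    print(factorial(n) // (factorial(p) * factorial(q) * factorial(r)))
    print(factorial(n - 1))                # ring, directions distinct: (n-1)!
    print(factorial(n - 1) // 2)           # ring, directions the same: (n-1)!/2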

Permutations

  1. The number of permutations of r objects taken from n unlike objects nPr = n! / (n - r)!
  2. The order is important. e.g. ABC is a different permutation from ACB

Combinations

  1. The number of combinations of r objects taken from n unlike objects nCr = n! / (r!(n - r)!)
  2. The order is not important. e.g. ABC, ACB, BCA, BAC, CAB, CBA
  3. Probability of selection (samples of size n from a population of size N) = 1 / NCn
  4. every sample has the same probability of selection
  5. e.g. N = 2, n = 1, Prob. = 1 / 2C1 = 0.5
  6. e.g. N = 100, n = 10, Prob. = 1 / 100C10
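
In Python 3.8+, math.perm and math.comb evaluate nPr and nCr directly, and give the selection probabilities above:

    from math import comb, perm

    print(perm(5, 3))         # 5P3 = 60: order matters
    print(comb(5, 3))         # 5C3 = 10: order ignored
    print(1 / comb(2, 1))     # N = 2, n = 1: probability 0.5
    print(1 / comb(100, 10))  # N = 100, n = 10: about 5.8e-14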

Events

  1. P(A or B) = P(A) + P(B) - P(A and B)
  2. Mutually exclusive: two events cannot occur at the same time.
  3. e.g. draw one card, P(king or queen) = P(king) + P(queen)
  4. Not mutually exclusive
  5. e.g. draw one card, P(ace or heart) = P(ace) + P(heart) - P(ace of hearts)
  6. Exhaustive: P(A or B) = 1
  7. Conditional probability: P(A | B) = P(A and B) / P(B)
    e.g. P(picture card | heart)
    e.g. P(prime | odd)
  8. Independent events: P(A and B) = P(A)P(B)
    e.g. a die is thrown and a coin is tossed
    e.g. a 4 is obtained on the first throw of a die and an odd number is obtained on the second throw
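
The card examples can be checked by enumerating the 52-card deck; this sketch verifies the addition law for the non-mutually-exclusive case (the deck encoding is ad hoc):

    from fractions import Fraction

    ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
    suits = ['spades', 'hearts', 'diamonds', 'clubs']
    deck = [(r, s) for r in ranks for s in suits]

    def prob(event):
        return Fraction(sum(1 for card in deck if event(card)), len(deck))

    # P(ace or heart) = P(ace) + P(heart) - P(ace of hearts) = 4/13
    lhs = prob(lambda c: c[0] == 'A' or c[1] == 'hearts')
    rhs = (prob(lambda c: c[0] == 'A') + prob(lambda c: c[1] == 'hearts')
           - prob(lambda c: c == ('A', 'hearts')))
    print(lhs, rhs, lhs == rhs)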

Posterior Probabilities

  1. Prior Prob. x Conditional Prob. = Joint Prob.
  2. Joint Prob. of A / sum of all Joint Prob. = Posterior Prob. of A
  3. Joint Prob. of B / sum of all Joint Prob. = Posterior Prob. of B
  4. Joint Prob. of C / sum of all Joint Prob. = Posterior Prob. of C
  5. Sum of Posterior Prob. = 1
  6. Sum of the Joint Probs of A, B and C = Marginal Prob. of the observed event (see the sketch below)
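
A numerical sketch of this table, with invented priors and conditional probabilities for three causes A, B and C:

    priors       = {'A': 0.5, 'B': 0.3, 'C': 0.2}
    conditionals = {'A': 0.1, 'B': 0.2, 'C': 0.4}   # P(observed event | cause)

    joints   = {k: priors[k] * conditionals[k] for k in priors}
    marginal = sum(joints.values())                 # marginal prob. of the event
    posteriors = {k: joints[k] / marginal for k in joints}

    print(posteriors)                 # posterior probs of A, B and C
    print(sum(posteriors.values()))   # sums to 1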

Bayes' formula

  1. P(A | B) = P(B | A) P(A) / P(B)

Bayes' Theorem

  1. e.g. A1 or A2 = S, where A1 and A2 are mutually exclusive and exhaustive events
  2. P(A1 | B) = P(B | A1) P(A1) / (P(B | A1) P(A1) + P(B | A2) P(A2))
  3. e.g. A1, A2 or A3 = S, where A1, A2 and A3 are mutually exclusive and exhaustive events
  4. P(A1 | B) = P(B | A1) P(A1) / (P(B | A1) P(A1) + P(B | A2) P(A2) + P(B | A3) P(A3))

Theorem of Total Probability

  1. E1, E2 and E3 are mutually exclusive and exhaustive events
  2. P(A) = P(A | E1) P(E1) + P(A | E2) P(E2) + P(A | E3) P(E3)
  3. E1 and E2 are mutually exclusive and exhaustive events
  4. P(A) = P(A | E1) P(E1) + P(A | E2) P(E2)
  5. E1 and not E1 are mutually exclusive and exhaustive events
  6. P(A) = P(A | E1) P(E1) + P(A | not E1) P(not E1)
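
A quick numerical check of cases 5-6 with made-up probabilities; the same quantities feed straight into Bayes' theorem:

    p_E1 = 0.3
    p_A_given_E1, p_A_given_notE1 = 0.9, 0.2

    p_A = p_A_given_E1 * p_E1 + p_A_given_notE1 * (1 - p_E1)
    print(p_A)                           # 0.41 by total probability
    print(p_A_given_E1 * p_E1 / p_A)     # P(E1 | A) by Bayes' theorem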

Probability mass function

  1. range of X is {1, 2, 3, 4, 5, 6}
  2. Probability mass function of X is pX(x) = P(X = x)
    e.g. pX(x) = 1/6, x = 1, 2, 3, 4, 5, 6
e.g. range of X is {1, 2, 3}

  x         1    2    3
  P(X = x)  0.1  0.6  0.3

E(X) = sigma x P(X = x) = 1(0.1) + 2(0.6) + 3(0.3) = 2.2

e.g. range of X is {1, 2, 3, 4, 5}

  x         1    2    3    4    5
  P(X = x)  0.1  0.3  0.2  0.3  0.1

E(X^2) = sigma x^2 P(X = x) = 1^2 (0.1) + 2^2 (0.3) + 3^2 (0.2) + 4^2 (0.3) + 5^2 (0.1) = 10.4
V(X) = E[(X - mu)^2] = E(X^2) - mu^2; here mu = E(X) = 3, so V(X) = 10.4 - 3^2 = 1.4
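
The same calculations for the second table, as a short sketch in Python:

    xs = [1, 2, 3, 4, 5]
    ps = [0.1, 0.3, 0.2, 0.3, 0.1]

    mu  = sum(x * p for x, p in zip(xs, ps))       # E(X) = 3.0
    ex2 = sum(x * x * p for x, p in zip(xs, ps))   # E(X^2) = 10.4
    print(mu, ex2, ex2 - mu**2)                    # V(X) = 10.4 - 9 = 1.4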

Probability generating function

  1. Pi(s) = E(s^X) = p(0) + p(1)s + p(2)s^2 for X taking values 0, 1, 2
  2. p.m.f. of X: x = 1, 2 and p(x) = 0.6, 0.4
  3. p.g.f. of X is Pi(s) = 0.6s + 0.4s^2
  4. p.m.f. of X: x = 0, 1, 2 and p(x) = 0.2, 0.6, 0.2
  5. p.g.f. of X is Pi(s) = 0.2 + 0.6s + 0.2s^2
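
Since Pi(s) = E(s^X), we have Pi(1) = 1 and Pi'(1) = E(X); a symbolic check of the second p.g.f., assuming sympy is available:

    import sympy as sp

    s = sp.symbols('s')
    Pi = sp.Rational(1, 5) + sp.Rational(3, 5) * s + sp.Rational(1, 5) * s**2

    print(Pi.subs(s, 1))              # 1: the probabilities sum to one
    print(sp.diff(Pi, s).subs(s, 1))  # E(X) = 0(0.2) + 1(0.6) + 2(0.2) = 1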

Terminologies

  1. (Axioms, Definitions, Postulates) -> (Lemmas -> Theorems) -> (Theorems -> Corollaries)
  2. An axiom is a self-evident and generally accepted principle.
  3. A definition of a term is a statement giving the strict and precise meaning of the term.
  4. A postulate is an axiom in geometry.
  5. A lemma is a less important theorem used in the proof of another theorem.
  6. A theorem is a general conclusion proved logically from certain given assumptions.
  7. A corollary is a theorem that follows so directly from another theorem that little or no further proof is necessary.

The central limit theorem

  1. If n independent random observations are taken from a population with mean mu and finite variance sigma^2, then for large n the distribution of their mean mu-hat is approximately normal with mean mu and variance (sigma^2) / n.
  2. mu-hat is approximately distributed as N(mu, sigma^2 / n)
  3. X1, X2, X3, ..., Xn are n independent random observations
  4. Tn = X1 + X2 + X3 + ... + Xn is approximately normal with mean n(mu) and variance n(sigma^2)
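
A simulation sketch with numpy: sample means of n observations from a skewed exponential population (mu = 1, sigma^2 = 1) behave like N(mu, sigma^2 / n), which also illustrates that the sample mean is unbiased for mu:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 30                              # exponential(1): mu = 1, sigma^2 = 1
    means = rng.exponential(scale=1.0, size=(100_000, n)).mean(axis=1)

    print(means.mean())   # close to mu = 1
    print(means.var())    # close to sigma^2 / n = 1/30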

Estimator

  1. An estimator is unbiased if the expectation of its sampling distribution is equal to the parameter being estimated.
  2. e.g. the estimator of mu in N(mu, sigma^2) is X-bar
  3. E(X-bar) = mu
  4. Var(X-bar) = sigma^2 / n

Interpreting p values

  1. H0: theta = 0, H1: theta != 0
  2. p <= 0.01: strong evidence against H0
  3. 0.01 < p <= 0.05: moderate evidence against H0
  4. 0.05 < p <= 0.1: weak evidence against H0
  5. p > 0.1: little evidence against H0
  6. The cut-off points are 0.01, 0.05 and 0.1.
  7. A Type I error is made if we reject H0 when it is true.
  8. P(Type I error) = P(rejecting H0 | H0 is true)
  9. A Type II error is made if we do not reject H0 when it is false.
  10. P(Type II error) = P(not rejecting H0 | H1 is true)

Significance probability

  1. For a two-sided test, the significance probability is the sum of the two tail areas, i.e. the probability under H0 of a test statistic at least as extreme as the value observed.
  2. e.g. p = P(abs(T) >= 2.055) for T ~ t(19), p = 0.0539
  3. e.g. 0.975-quantile of t(19) is 2.093, p = P(abs(T) >= 2.093) = 0.05
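
Both examples can be reproduced with scipy.stats.t, where sf is the upper-tail probability and ppf the quantile function:

    from scipy.stats import t

    print(2 * t.sf(2.055, df=19))   # two-sided p ~ 0.0539
    print(t.ppf(0.975, df=19))      # 0.975-quantile ~ 2.093
    print(2 * t.sf(2.093, df=19))   # two-sided p ~ 0.05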

Significance level

  1. For a 100(1 - a)% confidence level, the significance level is a (or 100a%).
  2. e.g. 99% confidence level, significance level is 0.01
  3. e.g. 95% confidence level, significance level is 0.05
  4. e.g. 90% confidence level, significance level is 0.10

Simple linear regression analysis

  1. Yi = b1 + b2 Xi + ei
  2. i = 1, 2, 3, ..., n
  3. fitted values Yi-hat = b1-hat + b2-hat Xi, where b1-hat and b2-hat are the least squares estimates
  4. The {ei} are uncorrelated random variables with mean 0 and constant variance sigma^2
  5. Normality of the {ei} is assumed in addition when significance tests are carried out and confidence intervals are constructed.
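
A least squares sketch with numpy on made-up (x, y) data; np.polyfit with deg=1 returns the slope estimate first, then the intercept:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

    b2_hat, b1_hat = np.polyfit(x, y, deg=1)   # slope, then intercept
    print(b1_hat, b2_hat)                      # least squares estimates
    print(b1_hat + b2_hat * x)                 # fitted values Yi-hat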

F distribution

  1. Let X1, X2, X3, ..., Xm be a random sample of size m from a normal population with mean mux and variance (sigmax)^2
  2. Let Y1, Y2, Y3, ..., Yn be a random sample of size n from a normal population with mean muy and variance (sigmay)^2
  3. F = (sigma(Xi - X-bar)^2 / (m - 1)) / (sigma(Yi - Y-bar)^2 / (n - 1))
  4. F distribution with m - 1 d.f. and n - 1 d.f.
  5. If the two population variances are equal, the F ratio should be about 1
  6. If the two population variances are different, the F ratio should be different from 1
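
A variance-ratio sketch on two simulated normal samples with equal variances; scipy.stats.f gives the two-sided p-value:

    import numpy as np
    from scipy.stats import f

    rng = np.random.default_rng(1)
    x = rng.normal(size=20)                         # m = 20
    y = rng.normal(size=25)                         # n = 25

    F = x.var(ddof=1) / y.var(ddof=1)               # ratio of sample variances
    p = 2 * min(f.cdf(F, 19, 24), f.sf(F, 19, 24))  # two-sided p-value
    print(F, p)                                     # F near 1, large p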

OSI reference model

  1. Layer 1: Physical Layer
  2. Layer 2: Data Link Layer (Media Access Control MAC, Logical Link Control LLC)
  3. Layer 3: Network Layer
  4. Layer 4: Transport Layer
  5. Layer 5: Session Layer
  6. Layer 6: Presentation Layer
  7. Layer 7: Application Layer

Software development process: The waterfall model

  1. Requirements analysis and definition: The system's services are established by consultation with system users.
  2. System and software design: The systems design process allocates the requirements to either hardware or software systems. Software design involves identifying and describing the fundamental software system abstractions and their relationships.
  3. Implementation and unit testing: The software design is realised as a set of programs. Unit testing involves verifying that each unit meets its specification.
  4. Integration and system testing: Programs are integrated and tested as a complete system to ensure that the software requirements have been met.
  5. Operation and maintenance: The system is installed and put into practical use. Maintenance involves correcting errors, improving the implementation of system units and enhancing the system's services as new requirements are discovered.

ITSM: ITIL service lifecycle stages

  1. Service strategy: collaboration between business strategists and IT to develop IT service strategies that support the business strategy
  2. Service design: designing the overarching IT architecture and each IT service to meet customers' business objectives by being both fit for purpose and fit for use
  3. Service transition: managing and controlling changes into the live IT operational environment, including the development and transition of new or changed IT services
  4. Service operation: delivering and supporting operational IT services in such a way that they meet business needs and expectations and deliver forecasted business benefits
  5. Continual service improvement: learning from experience and adopting an approach which ensures continual improvement of IT services

Protons

  1. Protons are made up of quarks and gluons.

Truth and validity

  1. True premises, valid reasoning: the conclusion is true; the argument is valid.
  2. True premises, invalid reasoning: the conclusion's truth is unknown; the argument is invalid.
  3. False premises, valid reasoning: the conclusion's truth is unknown; the argument is valid.
  4. False premises, invalid reasoning: the conclusion's truth is unknown; the argument is invalid.

Necessary condition

X is a necessary condition for Y. The absence of X guarantees the absence of Y. It is impossible to have Y without X. e.g. Having four sides (X) is necessary for being a square (Y).

To show that X is not a necessary condition for Y, we find a case where Y is present but X is not. e.g. Being rich (X) is not necessary for being happy (Y).

Sufficient condition

X is a sufficient condition for Y. The presence of X guarantees the presence of Y. It is impossible to have X without Y. e.g. Being a square (X) is sufficient for having four sides (Y). If X is present, then Y must also be present.

To show that X is not a sufficient condition for Y, we find a case where X is present but Y is not. e.g. Loyalty (X) is not sufficient for honesty (Y).

4 cases:

  1. X is necessary but not sufficient for Y.
  2. X is sufficient but not necessary for Y.
  3. X is both necessary and sufficient for Y.
  4. X is neither necessary nor sufficient for Y.
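
The definitions translate directly into implications: "X is necessary for Y" means Y implies X, and "X is sufficient for Y" means X implies Y. A sketch over hypothetical (X, Y) observations:

    # Hypothetical observed cases of (X present, Y present).
    cases = [(True, True), (True, False), (False, False)]

    necessary  = all(x for x, y in cases if y)   # no case has Y without X
    sufficient = all(y for x, y in cases if x)   # fails: (True, False) exists
    print(necessary, sufficient)                 # True False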