Longitudinal data are data about a relatively small number of individuals collected over a period of time.
Cross-sectional data are data about a large number of individuals collected at one point in time.
Data which are measured by sorting the values into various categories are called categorical data.
Data which can be ordered or ranked are called ordinal data.
Data which consist of measurements on an interval scale are called interval data or interval scale data.
A prospective study is an epidemiological study in which a potential cause of a disease is investigated by finding two groups of people, one of which is exposed to the potential cause and the other of which is not. The two groups are then followed up for some time to see if one group suffers more from the disease than the other.
A retrospective study is an epidemiological study in which potential causes of a disease are investigated by finding two groups of people, one suffering from the disease and the other not. The history of the people in the groups is then investigated to identify differences in their exposure to different potential causes.
General form of a linear programming model
optimize z = c1x1 + c2x2 + c3x3
subject to a11x1 + a12x2 + a13x3 <= / = / >= b1
a21x1 + a22x2 + a23x3 <= / = / >= b2
a31x1 + a32x2 + a33x3 <= / = / >= b3
x1, x2, x3 >= 0
cj, aij and bi are the parameters of the model
j = 1, 2, 3
i = 1, 2, 3
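A minimal sketch of solving an LP of this form in Python with scipy.optimize.linprog; the objective coefficients and constraints below are made-up illustrative numbers, and linprog minimizes, so the objective is negated to maximize.

# Hypothetical example: maximize z = 3x1 + 2x2 + 5x3
# subject to x1 + x2 + x3 <= 10, 2x1 + x2 <= 8, x1 + 2x3 <= 12, x1, x2, x3 >= 0
from scipy.optimize import linprog

c = [-3, -2, -5]                          # negate c because linprog minimizes
A_ub = [[1, 1, 1], [2, 1, 0], [1, 0, 2]]  # coefficients aij of the <= constraints
b_ub = [10, 8, 12]                        # right-hand sides bi
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)  # xj >= 0
print(res.x, -res.fun)                    # optimal (x1, x2, x3) and maximum z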
Arrangements
The number of ways of arranging n unlike objects in a line = n! e.g. A, B, C can be arranged in 3! = 6 ways
The number of ways of arranging in a line n objects of which p of one type are alike, q of another type are alike, r of a third type are alike = n! / (p!q!r!)
The number of ways of arranging n unlike objects in a ring when clockwise and anticlockwise arrangements are different = (n-1)!
The number of ways of arranging n unlike objects in a ring when clockwise and anticlockwise arrangements are the same = (n-1)! / 2
Permutations
The number of permutations of r objects taken from n unlike objects nPr = n! / (n - r)!
The order is important. e.g. ABC is a different permutation from ACB
Combinations
The number of combinations of r objects taken from n unlike objects nCr = n! / (r!(n - r)!)
The order is not important. e.g. ABC, ACB, BCA, BAC, CAB and CBA all count as the same combination
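These counting formulas can be checked with the Python standard library (math.factorial, math.perm and math.comb, available from Python 3.8); the word example is illustrative only.

import math

print(math.factorial(3))           # arrangements of 3 unlike objects in a line: 3! = 6
print(math.perm(5, 3))             # 5P3 = 5! / (5 - 3)! = 60 ordered selections
print(math.comb(5, 3))             # 5C3 = 5! / (3! 2!) = 10 unordered selections
print(math.factorial(5 - 1))       # 5 unlike objects in a ring, directions differ: 4! = 24
print(math.factorial(5 - 1) // 2)  # ring, clockwise and anticlockwise the same: 4! / 2 = 12

# Arrangements of MISSISSIPPI (11 letters: I x 4, S x 4, P x 2, M x 1)
print(math.factorial(11) // (math.factorial(4) * math.factorial(4) * math.factorial(2)))  # 34650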
Probability of selection (samples of size n from a population of size N) = 1 / NCn
every sample has the same probability of selection
e.g. N = 2, n = 1, Prob. = 1 / 2C1 = 0.5
e.g. N = 100, n = 10, Prob. = 1 / 100C10
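The same selection probabilities, checked with math.comb:

import math

print(1 / math.comb(2, 1))     # N = 2, n = 1: 0.5
print(1 / math.comb(100, 10))  # N = 100, n = 10: about 5.8e-14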
Events
P(A or B) = P(A) + P(B) - P(A and B)
Mutually exclusive: two events cannot occur at the same time.
e.g. draw one card, P(king or queen) = P(king) + P(queen)
Not mutually exclusive
e.g. draw one card, P(ace or heart) = P(ace) + P(heart) - P(ace of hearts)
Exhaustive: events A and B together cover the whole sample space, so P(A or B) = 1
Conditional probability: P(A | B) = P(A and B) / P(B) e.g. P(picture card | heart) e.g. P(prime | odd)
Independent events: P(A and B) = P(A)P(B) e.g. a die is thrown and a coin is tossed e.g. a 4 is obtained on the first throw of a die and an odd number is obtained on the second throw
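A sketch of the card examples above using exact fractions from the standard library (one card drawn from a 52-card deck):

from fractions import Fraction

p_king, p_queen = Fraction(4, 52), Fraction(4, 52)
print(p_king + p_queen)                    # mutually exclusive: 8/52 = 2/13

p_ace, p_heart, p_ace_of_hearts = Fraction(4, 52), Fraction(13, 52), Fraction(1, 52)
print(p_ace + p_heart - p_ace_of_hearts)   # not mutually exclusive: 16/52 = 4/13

p_picture_and_heart = Fraction(3, 52)      # J, Q, K of hearts
print(p_picture_and_heart / p_heart)       # P(picture card | heart) = 3/13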
Posterior Probabilities
Prior Prob. × Conditional Prob. = Joint Prob.
Joint Prob. of A / sum of all Joint Prob. = Posterior Prob. of A
Joint Prob. of B / sum of all Joint Prob. = Posterior Prob. of B
Joint Prob. of C / sum of all Joint Prob. = Posterior Prob. of C
Sum of Posterior Prob. = 1
Sum of Joint Prob. of A, B and C = Marginal Prob.
Bayes' formula
P(A | B) = P(B | A) P(A) / P(B)
Bayes' Theorem
e.g. A1 and A2 mutually exclusive and exhaustive (A1 ∪ A2 = S): P(A1 | B) = P(B | A1) P(A1) / [P(B | A1) P(A1) + P(B | A2) P(A2)]
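A minimal sketch of the prior × conditional → joint → posterior calculation, with made-up priors P(A), P(B), P(C) and conditional probabilities of an observed event D:

priors       = {"A": 0.5, "B": 0.3, "C": 0.2}   # hypothetical prior probabilities
conditionals = {"A": 0.1, "B": 0.4, "C": 0.7}   # hypothetical P(D | A), P(D | B), P(D | C)

joints = {k: priors[k] * conditionals[k] for k in priors}  # prior x conditional = joint
marginal = sum(joints.values())                            # P(D) = sum of all joint probabilities
posteriors = {k: joints[k] / marginal for k in joints}     # Bayes' theorem

print(posteriors)                # P(A | D), P(B | D), P(C | D)
print(sum(posteriors.values()))  # the posterior probabilities sum to 1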
An axiom is a self-evident and generally accepted principle.
A definition of a term is a statement giving the strict and precise meaning of the term.
A postulate is an axiom in geometry.
A lemma is a less important theorem used in the proof of another theorem.
A theorem is a general conclusion proved logically from certain given assumptions.
A corollary is a theorem that follows so readily from the proof of another theorem that little or no further proof is necessary.
The central limit theorem
If n independent random observations are taken from a population with mean mu and finite variance sigma^2, then for large n the distribution of their mean mu-hat is approximately normal with mean mu and variance (sigma^2) / n.
mu-hat has approximately the same distribution as N(mu, sigma^2 / n)
X1, X2, X3, ..., Xn are n independent random observations
Tn = X1 + X2 + X3 + ... + Xn is approximately normal with mean n(mu) and variance n(sigma^2)
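A simulation sketch of the central limit theorem with numpy: sample means drawn from a (non-normal) exponential population cluster around mu with variance close to sigma^2 / n. The population and sample size are made up.

import numpy as np

rng = np.random.default_rng(0)
n = 50
means = rng.exponential(scale=2.0, size=(10_000, n)).mean(axis=1)  # exponential: mu = 2, sigma^2 = 4

print(means.mean())  # close to mu = 2
print(means.var())   # close to sigma^2 / n = 4 / 50 = 0.08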
Estimator
An estimator is unbiased if the expectation of its sampling distribution is equal to the parameter being estimated.
e.g. the estimator of mu in N(mu, sigma^2) is X-bar
E(X-bar) = mu
Var(X-bar) = sigma^2 / n
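A small simulation illustrating unbiasedness (illustrative numbers): the average of many sample means is close to mu, and the sample variance with divisor n - 1 is unbiased for sigma^2 while the divisor-n version is not.

import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 10.0, 3.0
samples = rng.normal(mu, sigma, size=(20_000, 5))   # many samples of size n = 5

xbars = samples.mean(axis=1)
print(xbars.mean())                        # E(X-bar) is approximately mu = 10
print(xbars.var())                         # Var(X-bar) is approximately sigma^2 / n = 1.8

print(samples.var(axis=1, ddof=1).mean())  # divisor n - 1: approximately sigma^2 = 9 (unbiased)
print(samples.var(axis=1, ddof=0).mean())  # divisor n: approximately (n - 1)/n * sigma^2 = 7.2 (biased)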
Interpreting p values
H0: theta = 0, H1: theta != 0
p <= 0.01: strong evidence against H0
0.01 < p <= 0.05: moderate evidence against H0
0.05 < p <= 0.1: weak evidence against H0
p > 0.1: little evidence against H0
cut-off points: 0.01, 0.05, 0.1
A Type I error is made if we reject H0 when it is true.
P(Type I error) = P(rejecting H0 | H0 is true)
A Type II error is made if we do not reject H0 when it is false.
P(Type II error) = P(not rejecting H0 | H1 is true)
Significance probability
The significance probability is the sum of the two shaded tail areas.
e.g. p = P(abs(T) >= 2.055) for T ~ t(19), p = 0.0539
e.g. 0.975-quantile of t(19) is 2.093, p = P(abs(T) >= 2.093) = 0.05
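The two-sided p value in the example can be reproduced with scipy.stats.t:

from scipy.stats import t

print(2 * t.sf(2.055, df=19))                # two tail areas of t(19): about 0.054
print(t.ppf(0.975, df=19))                   # 0.975-quantile of t(19): about 2.093
print(2 * t.sf(t.ppf(0.975, df=19), df=19))  # exactly 0.05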
Significance level
For a 100(1 - a)% confidence level, the significance level is a (or 100a%).
e.g. 99% confidence level, significance level is 0.01
e.g. 95% confidence level, significance level is 0.05
e.g. 90% confidence level, significance level is 0.10
Simple linear regression analysis
Yi = b1 + b2 Xi + ei
i = 1, 2, 3, ..., n
estimator (fitted value) Yi-hat = b1-hat + b2-hat Xi
The {ei} are uncorrelated random variables with mean 0 and constant variance sigma^2
Normality of the {ei} is required for confidence intervals and hypothesis tests on b1 and b2 (least squares estimation itself does not need it)
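A sketch of fitting the model by ordinary least squares with numpy; the x and y values are made up.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

b2_hat, b1_hat = np.polyfit(x, y, deg=1)           # slope and intercept estimates
y_hat = b1_hat + b2_hat * x                        # fitted values
residuals = y - y_hat
sigma2_hat = residuals @ residuals / (len(x) - 2)  # estimate of sigma^2 (n - 2 d.f.)

print(b1_hat, b2_hat, sigma2_hat)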
F distribution
Let X1, X2, X3, ..., Xm be a random sample of size m from a normal population with mean mux and variance (sigmax)^2
Let Y1, Y2, Y3, ..., Yn be a random sample of size n from a normal population with mean muy and variance (sigmay)^2
F = [sum(Xi - X-bar)^2 / (m - 1)] / [sum(Yi - Y-bar)^2 / (n - 1)]
F distribution with m - 1 d.f. and n - 1 d.f.
If the two population variances are equal, the F ratio should be about 1
If the two population variances are different, the F ratio should be different from 1
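A sketch of the variance-ratio F statistic with numpy and scipy; the two samples are made up, and doubling the smaller tail area is one common convention for a two-sided p value.

import numpy as np
from scipy.stats import f

x = np.array([5.1, 4.8, 5.6, 5.0, 4.7, 5.3, 5.2, 4.9])  # m = 8
y = np.array([6.2, 5.9, 6.5, 6.1, 5.8, 6.4])            # n = 6

F = x.var(ddof=1) / y.var(ddof=1)                   # ratio of the two sample variances
dfn, dfd = len(x) - 1, len(y) - 1                   # m - 1 and n - 1 degrees of freedom
p = 2 * min(f.sf(F, dfn, dfd), f.cdf(F, dfn, dfd))  # two-sided p value
print(F, p)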
OSI reference model
Layer 1: Physical Layer
Layer 2: Data Link Layer (Media Access Control MAC, Logical Link Control LLC)
Layer 3: Network Layer
Layer 4: Transport Layer
Layer 5: Session Layer
Layer 6: Presentation Layer
Layer 7: Application Layer
Software development process: The waterfall model
Requirements analysis and definition: The system's services are established by consultation with system users.
System and software design: The systems design process partitions the requirements into hardware and software systems. Software design involves identifying and describing the fundamental software system abstractions and their relationships.
Implementation and unit testing: The software design is realised as a set of programs. Unit testing involves verifying that each unit meets its specification.
Integration and system testing: Programs are integrated and tested as a complete system to ensure that the software requirements have been met.
Operation and maintenance: The system is installed and put into practical use. Maintenance involves correcting errors, improving the implementation of system units and enhancing the system's services as new requirements are discovered.
ITSM: ITIL service lifecycle stages
Service strategy: collaboration between business strategists and IT to develop IT service strategies that support the business strategy
Service design: designing the overarching IT architecture and each IT service to meet customers' business objectives by being both fit for purpose and fit for use
Service transition: managing and controlling changes into the live IT operational environment, including the development and transition of new or changed IT services
Service operation: delivering and supporting operational IT services in such a way that they meet business needs and expectations and deliver forecasted business benefits
Continual service improvement: learning from experience and adopting an approach which ensures continual improvement of IT services
Protons
Protons are made up of quarks and gluons.
Truth and validity
Premises true, inference valid: conclusion is true; the argument is valid
Premises true, inference invalid: conclusion may be true or false; the argument is invalid
Premises false, inference valid: conclusion may be true or false; the argument is valid
Premises false, inference invalid: conclusion may be true or false; the argument is invalid
Necessary condition
X is a necessary condition for Y. The absence of X guarantees the absence of Y. It is impossible to have Y without X. e.g. Having four sides (X) is necessary for being a square (Y).
To show that X is not a necessary condition for Y, we find a case where Y is present but X is not. e.g. Being rich (X) is not necessary for being happy (Y).
Sufficient condition
X is a sufficient condition for Y. The presence of X guarantees the presence of Y. It is impossible to have X without Y. e.g. Being a square (X) is sufficient for having four sides (Y). If X is present, then Y must also be present.
To show that X is not a sufficient condition for Y, we find a case where X is present but Y is not. e.g. Loyalty (X) is not sufficient for honesty (Y).
4 cases:
X is necessary but not sufficient for Y.
X is sufficient but not necessary for Y.
X is both necessary and sufficient for Y.
X is neither necessary nor sufficient for Y.
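One way to read these cases in code: "X is sufficient for Y" is the implication X -> Y, and "X is necessary for Y" is the implication Y -> X. A small sketch checking the square example over hypothetical shape records:

# Each record: (name, has_four_sides, is_square) -- illustrative data only
shapes = [("square", True, True), ("rectangle", True, False),
          ("triangle", False, False), ("circle", False, False)]

# X = has four sides, Y = is a square
necessary = all(x for _, x, y in shapes if y)   # Y -> X: no square lacks four sides
sufficient = all(y for _, x, y in shapes if x)  # X -> Y: fails (the rectangle)
print(necessary, sufficient)                    # True False: X is necessary but not sufficient for Y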