Cheatsheet test 1 – Inferential Statistics, pre-master Psychology, Universiteit Twente (2024/2025)

Overview of tests
Test of proportions: assess whether a sample proportion represents the true proportion in the population. Chi-square (goodness-of-fit test): check whether a nominal characteristic in the sample differs from the population. t.test: compare the mean of 1 group to a population mean; differences over time. Independent-sample t-test: whether 2 groups differ from each other (assuming equal variances). ANOVA: is ANY of the groups different from ANY of the other groups; goes beyond the t-test by testing the difference between two or more means. Welch ANOVA: like ANOVA but assuming unequal variances. Levene's test: mainly for groups/categories, dichotomous/nominal IVs; checks whether the error variances in the groups are the same. Breusch-Pagan: more general, not only for groups; tests heteroscedasticity. Mann-Whitney-Wilcoxon test: tests the effect of a dichotomous variable, assessing the relationship/medians between 2 groups (dummy & scale), treatment & control group, IV = dichotomous. Multiple regression: relationship between a continuous DV and multiple IVs. Wilcoxon signed-rank test: 2 measurements of 1 group, not normal, ordinal DV; tests change between t1 and t2. Kruskal-Wallis test: more than 2 groups (otherwise MWW); relationship between a nominal variable (groups) and a scale variable. Shapiro-Wilk test: tests normality, i.e. whether the DV is normally distributed. 1-sample t-test: big sample, normally distributed; check whether there is a difference/change → use the lm() procedure.

Hypotheses – 1-sample t-test of differences: H0: μ2 − μ1 = 0 (no change); HA: μ2 − μ1 ≠ 0 (change).
2-sided hypothesis: H0: μ is 100; HA: μ is not 100. 1-sided hypothesis: H0: μ ≥ 100; HA: μ < 100.
Regression (addition): H0: β1 = 0; HA: β1 ≠ 0. Additional term: H0: β2 = 0.

Notations: population proportion: π; sample proportion: p; population mean: μ. Fraction: between 0 and 1 (×100 → %). Proportion: between 0 and 1. Integer: −1, 0, 1, etc. Dummy: 0 and 1.

Standardised t-value (the standardised effect of b): t = (b − β)/s.e.(b); under H0 (β = 0) this reduces to t = b/s.e.(b). For a mean: t = (x̄ − μ)/s.e. The standard error (S.E.) of the mean is the SD of the sampling distribution; the S.E. of a proportion is likewise the SD of its sampling distribution. If you use the t-distribution: df = n − 1. Chi-square statistic: reject H0 when the statistic is > 3.84 (critical value for df = 1, α = .05). Cook's distance: flags influential cases. Effect size: horizontal % difference between 2 horizontal values (calculate the % first, from the total in the column).
Load data: dataset <- read.csv("data_scot.csv", sep = ",")
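A minimal R sketch of these basic tests, using the built-in t.test(), prop.test() and chisq.test(); the variables score and group are hypothetical:
# One-sample t-test: compare a sample mean to a population mean of 100
t.test(dataset$score, mu = 100)
# Test of proportions: does the sample proportion match pi = 0.5?
prop.test(x = 42, n = 100, p = 0.5)
# Chi-square goodness-of-fit: observed counts vs. assumed population shares
chisq.test(table(dataset$group), p = c(0.5, 0.5))
# Independent-sample t-test (equal variances assumed) vs. Welch (R's default)
t.test(score ~ group, data = dataset, var.equal = TRUE)
t.test(score ~ group, data = dataset)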

Unit 550 – Multiple regression, addition: the effect of 2 variables (DV = ratio)
Both education and family type independently affect EQ. Addition = adding different types of independent variables (family type 0 has a lower starting point). The effect of education is the same for both groups; β2 (open) gives the difference relative to the closed group.
Family type is either 0 (closed) or 1 (open), so we can simplify to:
Y = β0 + β1·Education (for closed)
Y = (β0 + β2) + β1·Education (for open)
Combining the effect of education and age on ageism in 1 linear equation (addition): Ŷ = β0 + β1·Age + β2·Education
In multiple regression there are general expectations: H0: β… = 0 (the variable has no effect); HA: β ≠ 0 (the variable has an effect). And specific expectation(s): H0: β2 = β1 = 0 (the variables have no effect); HA: at least one β is not zero → R-squared and F-test.
Finding the OLS lines and R-squared (descriptive statistics) AND testing/inference: regression coefficients (t-values) & overall model (F-test). Are the coefficients (β's) significant?
A deterministic relationship is not expected, so we have to check the residuals. They should be (1) normally distributed and (2) have the same variance → the residuals estimate the true errors in the population.
If the residuals are problematic: (1) non-linearity, (2) other factors play a role. Technically: the standard errors we use for inference will be wrong. How good are our estimates of the b-coefficients?
Do not check the residuals against Y (the original dependent variable) → it should be about the predicted Y and the residuals. First check the overall normality, then check the residuals against X (IV) and the predicted values → scatterplot (the residuals should sit in a band with 0 in the middle).

R studio – relationship between 2 (or more) IVs and a ratio DV using the lm procedure
Libraries: tidyverse – broom – modelr – car – lmtest – janitor – ggExtra
lm: model1 <- data %>% lm(y ~ x1 + x2, data = .)
residual analysis: data2 <- data %>% add_predictions(model1) %>% add_residuals(model1)
display residuals (histogram): data2 %>% ggplot(aes(x = resid)) + geom_histogram()

Unit 553 – Multiple regression, interaction: the combined effect of a ratio variable and a dummy
Interaction/moderation = the relationship between 2 variables is affected by a third variable. The intercepts are different but, more importantly, the effects will be different.
b0 = intercept; b1 = slope of the reference category; b2 = intercept difference; b3 = slope difference.
Effect for a group = b1·x plus, for the other group, b2·d + b3·x·d; difference in effect = b3·x·d.
Income = B2 + B3·Private − B4·Unemp + B5·Education − B6·Private·Education − B7·Unemp·Education

R studio – interaction/moderation
(1) Create dummies: data$x_dummy_1 <- ifelse(data$x == "public", 1, 0) → creates a dummy for public (the category coded 0 is the reference).
(2) Use lm() to estimate the model parameters: data_sample %>% lm(income ~ private + unemp + educ + educ*private + educ*unemp, data = .) %>% summary → t-values > 2 and p-values < 0.05?
(3) Add residuals: data_clean$residuals <- model$residuals
(4) Add predictions: data_clean$pred <- model$fitted.values
OR in one step: data <- data %>% add_predictions(model1) %>% add_residuals(model1)
Check residuals: data_clean %>% ggplot(aes(x = pred, y = residuals)) + geom_point()
Low vs high level of education: model_low_educ <- data_clean %>% filter(education == 0) %>% lm(support ~ campaign, data = .)
Visualize: data_clean %>% filter(education == 0) %>% ggplot(aes(x = campaign, y = support)) + geom_jitter(width = 0.2) + geom_smooth(method = "lm")
Create an interaction term by hand: data_clean <- data_clean %>% mutate(interaction = campaign * education)
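A small sketch (assuming the data_clean example above, with support, campaign and a 0/1 education dummy) of how the group-specific effects follow from the coefficients:
model_int <- lm(support ~ campaign * education, data = data_clean)  # expands to campaign + education + campaign:education
b <- coef(model_int)
b["campaign"]                            # b1: effect of campaign when education == 0
b["campaign"] + b["campaign:education"]  # b1 + b3: effect of campaign when education == 1
b["campaign:education"]                  # b3: the difference in effect between the groups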
Unit 554 – Multiple regression and non-linearity
Studying improves your grade (non-linearity) – the effect decreases when you study more.
Grade = β0 + β1·HoursStudying, with β1 = β2 − β3·HoursStudying →
Grade = β0 + (β2 − β3·HoursStud)·HoursStud = β0 + β2·HoursStud − β3·HoursStud²
Non-linearity: (1) inspect via scatterplot, (2) the residuals don't have equal variances → solution: add a squared term (X²).
For example (X = 75): Ŷ = 20 + 0.6·X + 0.002·X² → 20 + 0.6·75 + 0.002·(75·75) = 76.25

R studio – non-linearity
Scatterplot: data_sample %>% ggplot(aes(x = size, y = conflicts)) + geom_point() + geom_smooth(method = "lm", se = FALSE)
Estimate model: model1 <- data_sample %>% lm(conflicts ~ size, data = .)
Add residuals: data_sample <- data_sample %>% add_residuals(model1) OR data_sample$resid2 <- model1$residuals
Detecting non-linearity: data_sample %>% ggplot(aes(x = size, y = resid2)) + geom_point() + geom_smooth(...)
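The worked example can be checked by hand, and the squared term is added in lm() with I(); a sketch using the hypothetical size/conflicts variables from above:
20 + 0.6 * 75 + 0.002 * 75^2                                       # reproduces the 76.25 from the example
model_sq <- lm(conflicts ~ size + I(size^2), data = data_sample)   # quadratic term added
summary(model_sq)                                                  # a significant I(size^2) indicates non-linearity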
Unit 560 – Non-normality of residuals & omitted variables (assumptions of normality & equal variance)
Errors (εi) are in the population; residuals (ei) are in the sample. If a model is good, the errors will be random. Deviations are problematic because: (1) the mean of the residuals (zero) is affected by outliers/skew, (2) this mean is associated with b (the estimate), (3) we are less confident in the S.E. based on these means → execute the steps below.
1. Visual histogram inspection.
2. QQ plot (range of the variable): shows the relationship between what you would expect to find (x-axis) if the distribution were normal and what you observe (y-axis). Answers whether the distribution is normal (straight line); if the data deviate from normality, the line will show strong curvature. (The formal test for normality is the Shapiro-Wilk test.)
3. Shapiro-Wilk test (goodness of fit): the chance of finding a W in a sample smaller than the critical value. Tests the hypothesis that the distribution of the data deviates from a comparable normal distribution. If p < 0.05 → reject the null hypothesis → the data are not normally distributed. When the sample size increases, SW will lead to a greater probability of rejecting the null hypothesis. H0 = normal distribution.

R studio: checking and fixing normality
1. Create & store residuals: first estimate the model with lm, then add the residuals.
2. Histogram (x = residuals).
3. QQ plot: data %>% ggplot(aes(sample = residuals)) + geom_qq() + geom_qq_line()
4. Shapiro-Wilk test (test normality): shapiro.test(data$residuals) → H0: normal distribution
5. Repairing non-normality: data <- data %>% mutate(trans_y = log(y))
Unit 561 – Heteroscedasticity – non-equal variances and interaction effects
Homoscedasticity (residual variance) = homogeneity (of variances) = equal variances. Heteroscedasticity = heterogeneity (of variances) = unequal variances → bad because: (1) we only have one S.E. for the slope, and (2) the S.E. is used to evaluate the 'quality' of the slope / find p-values. It occurs because of (1) measurement error in Y (which is related to X) and (2) interaction effects.
→ Detect: make a graph of X (predicted values) against Y (residuals). Save the predictions and residuals and create a scatterplot.
Levene's test (mainly for groups, dichotomous/nominal independent variables – tests homogeneity): predicting the deviations (residuals) from the means, using the group variable. You take the residuals as your DV and the grouping variable as your IV and test whether there is a significant association. F-distributed. Can be used to test whether the error variances in two groups are the same. H0 = equal variances. A low p-value means that the deviations are bigger in one group than in another and the groups have different variances → reject H0. If the sample is big, this test will always be significant; moreover, unequal variances barely affect the S.E. estimates in larger samples.
Breusch-Pagan test (more general, not only used for groups – heteroscedasticity of the overall model): studies whether the residuals are associated with one or more variables → H0 = homogeneity. The test assumes normality of the errors (if violated → transform the Y variable). It is used to formally test for heteroscedasticity of the overall model. If the error variance is not the same across the model, the model estimates (the b-coefficients) are not necessarily wrong, but we are less sure about the quality of these estimates. With a p-value of 0 we have to reject the null hypothesis of equal error variance across the model; with p > 0.05 we don't reject the null hypothesis → equal variances.

R studio
Create the differences t2 − t1, create dummies, estimate the lm with read_diff, add residuals.
Levene's (groups only): leveneTest(resid ~ as.factor(treatment), data) OR, instead of resid, use read_diff: levenetest <- leveneTest(read_diff ~ treatment, data). Use as.factor since it is about groups (nominal).
Breusch-Pagan (for lm): bptest <- bptest(model1) → bptest
Example: the experiment with reading abilities → 1. compute the difference between t2 and t1 → 2. construct dummies (take the control group as reference category) → 3. estimate the linear model.
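A sketch of the full reading-abilities workflow, under the variable names assumed above (t1, t2 and a 0/1 treatment dummy with control = 0):
library(car)      # leveneTest()
library(lmtest)   # bptest()
data$read_diff <- data$t2 - data$t1                         # 1. difference between t2 and t1
model1 <- lm(read_diff ~ treatment, data = data)            # 2.-3. dummy in a linear model
leveneTest(read_diff ~ as.factor(treatment), data = data)   # H0: equal variances in the groups
bptest(model1)                                              # H0: homogeneity across the model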

Unit 563 – Outliers, influential cases, and multicollinearity
Residual: the extent to which a data point lies away from the estimated line.
Leverage: an outlier on the X (IV), say beyond ±2 S.E.; how much the observation's value on the predictor differs from the mean of the predictor. Look at whether they are different from the rest.
Influence: the extent to which the slope of the line is affected by the data point. Determined by residuals and leverage: when both are high, you have influence.
Cook's distance: if the predictions differ greatly when the observation is excluded, the observation is influential. A Cook's D value > 1.0 has too much influence. Solution: coding errors? Special cases? Consider removing them from the analysis. Large residuals with a small Cook's distance do not affect the model but are interesting too. Always worth checking the residuals.
Collinearity: including income twice (x1 and x2) when explaining happiness (income in dollars and in euros). An infinite number of planes (all planes around the line) are able to explain Y when the same independent variables are included in the model.
Multicollinearity: including one variable and a combination of variables measuring the same thing (e.g. social class & education & income, while social class is a combination of both). Consequences: (1) a problem with statistical inference; (2) in case of (almost perfect/strong) (multi)collinearity the t-values cannot be trusted, and thus the b-values are meaningless; (3) you cannot tell which of the IVs is really related to the DV. Solve: reconceptualize the theory; include an index combining the correlated variables.
Detecting multicollinearity: 1. look at the bivariate correlations between the X's (only for simple collinearity), or 2. look at the VIF (variance inflation factor). The VIF is based on the R² of an X with the other X's.

R studio – outliers, influential cases, and multicollinearity
Detecting cases with leverage: model <- data %>% lm(y ~ x1 + x2 + x3, data = .) → data$leverage <- hatvalues(model) → large residuals + large leverage = INFLUENCE.
Cook's distance: first find + store the residuals: model1 <- data %>% lm(y ~ x, data = .) → data$cd <- cooks.distance(model1) → plot(model1) → the 4th plot (click through) shows the 0.5 line (shows the influence) → if a case is above 0.5 or 1, mention that it is influential.
Detecting multicollinearity: model <- lm(y ~ x1 + x2 + x3, data = .) → vif(model)
VIF = 1 / (1 − R²); it runs between 1 and infinity. R² = 0 = no correlation between the X's → VIF is 1 → good, because the X's are not related. Values above 3 or 4 are considered problematic, meaning that the R² is bigger than 66-75%. High VIF → leave out a variable or combine the variables into a new one. With multicollinearity checks, the R² among the X's should be low.
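The VIF can be reproduced by hand from the R² of one X regressed on the other X's (variable names hypothetical):
aux <- lm(x1 ~ x2 + x3, data = data)     # auxiliary regression of x1 on the other predictors
r2 <- summary(aux)$r.squared
1 / (1 - r2)                             # VIF of x1 by hand
library(car)
vif(lm(y ~ x1 + x2 + x3, data = data))   # should give the same value for x1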

Unit 510 – A non-parametric alternative for means testing: the Wilcoxon signed-rank test (instead of the one-sample t-test)
The parametric one-sample t-test analyzes whether the average difference of 2 repeated measures = 0.
When to use non-parametric tests? (1) When variables are nominal/ordinal (they do not have means). (2) Small samples. (3) Variables are not normally distributed. (4) Non-linear relationships. Non-parametric tests are often just parametric tests (lm) using (signed) ranks.
Wilcoxon signed-rank test → ordinal variable | Goodness-of-fit test → nominal variable
Chi-square statistic → dummy + nominal variable | Mann-Whitney-Wilcoxon → dummy + ordinal
Why use ranks? 1. Outliers are less harmful. 2. A high rank only means bigger than the preceding one. 3. The test is less powerful, because we only have 'ranks'.
If 2 groups have the same summed ranks (not assuming a normal data distribution), then they are not different. Signed ranks take the absolute value of a number (ignoring the minus sign) and transform these values to ranks: −2.8 (value) → −3 (signed rank). High (negative/positive) signed ranks now mean more change than the preceding one; the sign indicates the direction of change. Relevant for change or difference variables. Equal values get the same rank = ties. Total of the ranks = (1 + n)·(n/2) | Half the total of the ranks = (1 + n)·(n/4) = mean of the sampling distribution.
Wilcoxon signed-rank test: (1) ordinal dependent variable, (2) you want to test for a change between t1 and t2 (but the mean is not meaningful since the variable is ordinal), (3) small sample, (4) the data are not normally distributed. We focus on the median value with this test, the value in the middle. Skew: 1 short and 1 long tail.
Wilcoxon null hypothesis: the expected sum of the positive signed ranks will be about the same as the sum of the negative signed ranks (half the total sum of signed ranks). If we reject H0, there is a difference. The shape of the sampling distribution is approximately normal; its mean is half the total of the summed ranks = n(n+1)/4. Using the S.E. of this distribution → do the summed positive ranks (W+) differ a lot from half the total of the summed ranks? → If yes → we reject H0 (there is a difference).
Example: the observed test statistic (= 3) should be smaller than the critical value (= 2) to reject H0 (no difference); in this case there is insufficient evidence to reject H0. If p > 0.05 → DON'T reject H0.

R studio
Check the distributions: (1) visual inspection, (2) skew & kurtosis (description), (3) Shapiro-Wilk test.
Skewness: library(moments) → skewness(data$t1) & skewness(data$t2) → symmetric: −0.5 to 0.5; moderate: 0.5 to 1 or −1 to −0.5; high: below −1 or greater than 1.
Kurtosis: fat tails – values are more likely to extend far from the mean, i.e. there are more extreme values (outliers) in the data. High kurtosis means heavy tails or outliers; kurtosis is the peakedness → kurtosis(data$t1) & kurtosis(data$t2) → bigger than 3 indicates heavy tails.
Shapiro-Wilk test (formal test of non-normality): shapiro.test(data$t1) & shapiro.test(data$t2) → if significant: the data do NOT come from a normal distribution.
Wilcoxon signed-rank test: calculate the difference: data$t2_t1 <- data$t2 - data$t1 → wilcox.test(data$t2_t1, alternative = "two.sided") or wilcox.test(score ~ group, data).
V is the sum of the positive signed ranks. If n = 30, half of the summed ranks is ((1 + n)·(n/2))/2 = 232.5. With V = 293.4 you have more positive signed ranks, but the difference is not far from that value and thus we cannot reject H0; V should be much larger before the null hypothesis is rejected.
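A tiny demo of how signed ranks are built and of the rank-total formulas (the difference scores are made up):
d <- c(-2.8, 1.3, 0.4, -5.0, 2.1)   # hypothetical difference scores
sign(d) * rank(abs(d))              # signed ranks: -4  2  1 -5  3
n <- 30
(1 + n) * (n / 2)                   # total of the ranks 1..30 = 465
(1 + n) * (n / 4)                   # half the total = 232.5 = mean of the sampling distribution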
Unit 545 – Mann-Whitney-Wilcoxon test (testing the effect of a dichotomous variable, instead of the independent (two) sample t-test / Welch t-test / dummy in a linear model)
Mann-Whitney-Wilcoxon test (MWW): *treatment and control group. *Assesses the relationship between 2 groups (dummy and scale variables). *If the summed ranks are the same in both groups, there probably is no difference. *The test statistic follows a normal distribution. *The independent variable should be nominal or ordinal with two expressions (male/female, control vs. experimental group) and the dependent variable should be an ordinal variable (Likert scales, for example).
Under the null hypothesis the groups come from the same distribution (come from the same population, there is no difference between the groups), and thus the distributions are the same. The only reason for finding differences between the groups is then chance.

R studio (asthma by group – make sure you have your dependent variable first & make a dummy!)
MWW: wilcox.test(asthma ~ group, data = data, exact = FALSE)
Using a parametric test instead (assuming equal variances): dummy_test <- lm(y ~ x1, data) → dummy_test

Solutions (recap)
High VIF: delete one of the variables that is highly correlated with another one, or combine those 2 variables.
Influential cases: replace outliers with more reasonable values (mean, median); transform the data.
Linearity: add a quadratic term or logarithmic function; reconceptualize the independent variable.
Non-normality: always think theoretically first; add a variable; transform the Y-variable (log/square).
Homogeneity: improve the measurement; consider an interaction variable; if pure → transform Y.

Unit 545 – Kruskal-Wallis test (testing the effect of a nominal variable)
Kruskal-Wallis test: *if you have more than 2 groups (otherwise use Mann-Whitney for comparisons). *Tests the relationship between a nominal variable (groups) and a scale variable → focus on differences between groups. *Is there a difference between several independent groups? *For a nominal variable and one ranked variable. *Tests whether the medians are the same in all groups – is there a difference in the rank totals? *Similar to the F-test in ANOVA. *Tests whether at least one group is different from another. *When calculating this statistic while there is no difference between the groups (H0), some H values will be smaller and some will be bigger, but the values follow a chi-square distribution.
How to use Kruskal-Wallis: 1. rank the data, 2. sum the ranks per group, 3. calculate the statistic (based on the square of the summed ranks in each group), 4. find the critical value (chi-square), 5. if H is bigger than the critical value, reject H0 (= all medians are equal).

R studio
Observe & test normality: histogram and shapiro.test(data$X)
Kruskal test: data %>% kruskal.test(Y ~ X, data = .)
Chi = H | DF = number of groups − 1 | p-value: whether the data come from a population in which the medians are the same.
Find the critical value (DF = number of groups minus 1): qchisq(0.05, df = 2, lower.tail = FALSE)
P-value: pchisq(Hvalue, df = 2, lower.tail = FALSE)
When the critical value > the observed H-value → do NOT reject H0 = no significant difference of any group compared to any of the others. The p-value tells whether the data come from a population in which the medians are the same; with p > 0.05 we CANNOT rule out the possibility that these data come from a population where the medians are the same.
IF the sample size is big, the variables are normally distributed, and the variable can be treated as interval/ratio → use (Welch) ANOVA and the associated F-test.
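The H statistic from steps 1-3 can also be computed by hand; a sketch using the standard no-ties formula H = 12/(N(N+1)) · Σ Rj²/nj − 3(N+1), with the hypothetical Y and X from above:
ranks <- rank(data$Y)                                       # 1. rank the data
Rj <- tapply(ranks, data$X, sum)                            # 2. sum the ranks per group
nj <- tapply(ranks, data$X, length)
N <- length(ranks)
H <- 12 / (N * (N + 1)) * sum(Rj^2 / nj) - 3 * (N + 1)      # 3. calculate the statistic
H > qchisq(0.05, df = length(nj) - 1, lower.tail = FALSE)   # 4.-5. TRUE → H exceeds the critical value → reject H0
kruskal.test(Y ~ X, data = data)                            # matches H when there are no ties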
Unit 470 – Describing/assessing the relationship between a scale and a dummy variable
Logistic regression: used when the dependent variable is a dummy. Look at the histogram of the DV to check for skewness.
Solving the problem of a dichotomous DV:
1. thinking in probabilities/proportions
2. taking the odds of the probabilities (p/(1−p))
3. taking the natural logarithm of the odds
4. interpreting the estimated outcome back to proportions
By not rejecting H0 we cannot ignore the possibility of there being no relationship.

Confusion matrix (CP = correct positive, CN = correct negative, FP = false positive, FN = false negative):
Prevalence: (FN+CP)/all = % yes over no
Accuracy: (CP+CN)/all = % correct; accuracy = 1 − error rate
Error rate: (FP+FN)/all = % false
Sensitivity: CP/(FN+CP) = recall
Specificity: CN/(CN+FP)
Precision: CP/(CP+FP)
Reducing the error rate (may) decrease sensitivity or specificity but increases the precision.
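A sketch computing these rates from hypothetical confusion-matrix counts:
CP <- 40; CN <- 45; FP <- 5; FN <- 10   # hypothetical counts
total <- CP + CN + FP + FN
(FN + CP) / total   # prevalence
(CP + CN) / total   # accuracy (= 1 - error rate)
(FP + FN) / total   # error rate
CP / (FN + CP)      # sensitivity (recall)
CN / (CN + FP)      # specificity
CP / (CP + FP)      # precision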
R studio
Relationship scale/dummy, logistic regression: model_lr <- data %>% glm(formula = y ~ x, family = "binomial", data = .)
Visualize the logistic regression: data %>% ggplot(aes(x = x, y = y)) + geom_point() + geom_smooth(method = "lm", se = FALSE, color = "red") + geom_smooth(method = "glm", method.args = list(family = "binomial"), se = FALSE, color = "green") → the glm part draws the logistic regression line.
Calculate predicted probabilities:
b0 <- -2          # the estimate of the constant in the output
b1 <- 1           # the estimate of the slope in the output
x <- 40           # the x-value for the unit you want the predicted probability for
est <- b0 + b1*x  # the linear equation: the estimate on the log-odds scale
prob <- 1 / (1 + exp(-est))  # transform the estimate back to a probability
prob              # the estimated probability
Probabilities for people with score 0, 18, 72 on x: x <- c(0, 18, 72) → new_data <- as.data.frame(x) → predict(model_lr, new_data, type = "response")
Overview: choosing a test
Always check: random sample, no outliers, big sample, unimodal distributions. The general assumptions of the linear model are: (1) linearity (additivity), (2) independence, (3) normality and (4) homogeneity of variances.
One-sample t-test / LM with repeated measures (comparing the mean of one sample to a H0, often the population mean): (1) differences in time, (2) studying couples. A one-sample t-test is used to test the difference between a sample mean and a population mean. Non-parametric alternative: Wilcoxon signed-rank test.
Two-sample t-test / LM with dummies + Welch t-test / independent-sample t-test (comparing the estimated means of two samples): (1) comparing two different groups. A two-sample t-test is used to test the difference between two sample means. Non-parametric alternative: Wilcoxon rank-sum test = Mann-Whitney test.
Deterministic: Y is fully explained by one variable (no error in the equation). Probabilistic: Y is not fully explained by one variable (finding a line with residuals). Bivariate: fitting a line (not fixed with only one data point). Trivariate: fitting a plane (not fixed under collinearity).
Studying one variable (change over time in a group, e.g. satisfaction with a product): dichotomy = proportion = parametric | nominal = goodness-of-fit test = non-parametric | interval or ratio = mean = parametric | ordinal = Wilcoxon signed-rank test = non-parametric.
Studying two variables: difference between groups. Normally: dummy + linear model or (Welch) t-test (for dummy + ratio variable). But what if the variable is dummy + nominal: chi-square statistic (non-parametric). What if the variable is dummy + ordinal: Mann-Whitney-Wilcoxon test.
Studying two variables: relationship between two scale (interval or ratio) variables. Normally: Pearson's r or a linear model = parametric. But what if the association is only monotonically increasing (and simple quadratic terms/logs do not work well) → Spearman or Kendall's tau = non-parametric.
Linear model null hypothesis:
*T-value > 2: it is very unlikely that these data come from a population where β is 0 (different groups) → we reject the null hypothesis.
*T-value < 2: it is very likely that these data come from a population where there is no association between the variables (similar groups, β = 0).
p < 0.05 = significant | p > 0.05 = not significant → don't reject the null hypothesis; non-significant means we cannot rule out that the data come from a population in which there is no linear association.
*p-value (significance): if p < 0.05, the chance that these data come from a population where there is no effect is small, so we reject the null hypothesis. (p > 0.05) = homogeneity (don't reject).
The p-value is the probability that the measured difference would occur due to random chance alone if the null hypothesis were true.
How to describe and test whether a distribution of a variable deviates from normality? See Unit 560 above: histogram, QQ plot, skewness/kurtosis, and the Shapiro-Wilk test.
