Assignment 1: Discussion Question

This module covered the following topics:

- The Language of Association and Prediction
- Pearson Correlation
- The Correlation Matrix
- Simple Regression
- Multiple Regression

By Friday, February 20, 2015, please post the questions you have about any of the above topics to the Discussion Area. Identify the specific topic and ask a clear, succinct question. Indicate where in the module, text, or WebEx demo your question occurs. This includes technical terms, equations, results, or interpretations.

Then, review your fellow students' questions. Is there anything you can clarify for them? Are you also confused about the same topic? Post your comment to at least two other students by Wednesday, February 25, 2015. (The instructor will also expand and clarify.)

All written assignments and responses should follow APA rules for attributing sources.

Assignment 1 Grading Criteria

- Presented a clear and thoughtful question or comment.
- Used vocabulary relevant to the current module's topics.
- Participated in the discussion by asking a question, providing a statement of clarification, providing a point of view with rationale, challenging a point of discussion, or making a relationship between one or more points of the discussion.
- Justified ideas and responses by using appropriate examples and references from texts, Web sites, and other references or personal experience.

___________

Unit 6: Module 6 – M6 Assignment 2 Quiz

Access dates:                              2/19/2015 12:00:00 AM to 2/25/2015 11:59:00 PM
Can be reviewed in Gradebook on:           2/25/2015 12:00:00 AM
Number of times this exam can be taken:    1
Time allowed to complete:                  1h

Assignment 2: Quiz

Due Sunday, February 22, 2015.

For each of the research questions below, identify the variables and the method of analysis that best answers the question. The best method of analysis may be material covered in Modules 3, 4, 5, or 6.

Present your answer with the following template, where IV = Independent Variable(s) and DV = Dependent Variable(s). For each variable, identify whether it is continuous or categorical/dichotomous to help you identify the best method.

IV =
DV =
Covariate =
Best method of analysis =

Assignment 2 Grading Criteria (answered all 10 questions accurately, 6 pts/question)     Maximum Points
Identified the IV, DV, and Covariate, or designated it as
not applicable (n/a), for each question (3 pts)                                          30
Identified the best method of analysis for each question (3 pts)                         30
Total:                                                                                   60

____________________________________________________________________________

Assignment 3: Application

1. Go to Doc Sharing and choose the database and assignment file you plan to work with for this module. It is highly recommended that you continue to use this file throughout the rest of the course.

Name of Database        Name of Assignment File
R7031business.sav       R7031business.savassignments.doc
R7031counseling.sav     R7031counseling.savassignments.doc
R7031education.sav      R7031education.savassignments.doc

2. Follow the instructions for the Module 6 Assignment. There are three questions to answer.
3. Your assignment will be submitted as a Microsoft Word document. Transfer appropriate SPSS tables (see note below) into your document to support your conclusions.
4. Name your assignment R7031M6yourlastname.doc and submit it to the M6: Assignment 3 Dropbox by Tuesday, February 24, 2015.

NOTE:
1. In SPSS, set your tables to APA Style.
2. From the data window (spreadsheet), select Edit and then Options from the menu.
3. In the Options menu, click on the Pivot Tables folder.
4. Select the academic style and then click OK. All tables will now be produced using the selected format.
5. Bring your tables into Microsoft Word.

___________________________________________________________________

Calculate your statistic and p-value

ASSUMPTIONS
Let's first check to see if the assumptions have been met.

Normal Distribution of the variables: Examining the measures of central tendency and variability, we see that none of the variables has violated this assumption: the means, medians, and modes are "close," and the indicators of skewness and kurtosis are well within the normal range (close to 0.00).

                         Age      Average number of hours of sleep during week
N Valid                  40       40
N Missing                0        0
Mean                     44.80    6.86
Median                   42.00    6.75
Mode                     41.00    6.00
Std. Deviation           10.43    1.275
Skewness                 .710     .411
Std. Error of Skewness   .374     .374
Kurtosis                 -.247    -.131
Std. Error of Kurtosis   .733     .733
Minimum                  30.00    4.00
Maximum                  69.00    10.00

Linearity: Next, we look at the scatterplots to see if there is enough of a linear relationship. We visualize an "ellipse" around the dots.

Homoscedasticity (the Distribution of Residuals): We examine the plot of the residuals as compared to the predicted values. The lack of a pattern (i.e., the dots are scattered randomly) supports the assumption of homoscedasticity (equal variance of error across values of the independent variables).
Statistics

Let's take a look at the statistics that will answer our research questions. Have we explained a large portion of variance?

Model Summary(b)

Model   R      R Square   Adjusted R Square   Std. Error of the Estimate
1       .557   .310       .292                1.07

a Predictors: (Constant), Age
b Dependent Variable: Average number of hours of sleep during week

There is R, which is the measure of association; in this case, the same as the Pearson r = -.557. While the Pearson's r coefficient indicates direction (positive or negative), the same statistic in regression is only looking at strength and not direction. That explains why the R statistic in the Model Summary (b) does not have a negative sign like the Pearson's r.

R² = .310: 31% of the variance is explained, and 69% is not. The adjusted R² reduces the R² by taking into account the sample size and the number of independent variables in the regression model (it becomes smaller as we have fewer observations per independent variable). The Standard Error of the Estimate is the standard deviation of the distribution of residuals.
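As a quick check on the Model Summary, the adjusted R² can be reproduced by hand with the standard formula (here N = 40 observations and k = 1 predictor):

$$R^2_{adj} = 1 - (1 - R^2)\,\frac{N - 1}{N - k - 1} = 1 - (1 - .310)\,\frac{39}{38} \approx .292$$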
Is this amount of variance accounted for significant? Look at the ANOVA SOURCE TABLE.

Model        Sum of Squares   df   Mean Square   F        Sig.
Regression   19.701           1    19.701        17.095   .000
Residual     43.793           38   1.152
Total        63.494           39

a Predictors: (Constant), Age
b Dependent Variable: Average number of hours of sleep during week

The F-test (the ratio of regression to residual variation) is F(1, 38) = 17.095, p < .001. The model is statistically significant.

And last, we examine the regression coefficient. In the case of one predictor, β (Beta) is the same as the correlation coefficient, -.557. So we can say this indicates a moderate negative impact that is statistically significant, at t = -4.135, p < .001.

Coefficients(a)

Model        B        Std. Error   Beta    t        Sig.
(Constant)   9.915    .758                 13.087   .000
Age          -.068    .016         -.557   -4.135   .000

a Dependent Variable: Average number of hours of sleep during week
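For readers working outside SPSS, the same analysis can be run in Python. This is a minimal sketch, again assuming the hypothetical sleep.csv used above; statsmodels does the work SPSS does here.

```python
# Minimal sketch of the simple regression, assuming the hypothetical
# "sleep.csv" from the earlier example. The summary reproduces what
# SPSS reports: R-squared, the ANOVA F test, and the coefficient table.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("sleep.csv")
X = sm.add_constant(df["age"])               # adds the intercept term (a)
model = sm.OLS(df["sleep_hours"], X).fit()
print(model.summary())

# From the SPSS table, the fitted line is: sleep = 9.915 - .068 * age
print(model.params)
```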
Retain/reject the null hypothesis

We have two research questions and two hypotheses.

1. Does Age predict sleep?
H0: R² = 0
H1: R² > 0

R² = .310, and the F-test (the ratio of regression to residual variation) is F(1, 38) = 17.095, p < .001. The model is statistically significant. We reject the null hypothesis.

2. How good a predictor is Age?
H0: B = 0
H1: B ≠ 0

The standardized coefficient β = -.557 indicates a moderate negative impact, t = -4.135, p < .001. We reject the null hypothesis.

Risk of Type I and Type II error

Type I Error
R² is a measure of effect size, and the interpretation using the effect size chart from previous lessons can be applied here. Notice that it is similar to the discussion of size and strength.

Value of d     How strong the effect
0.00 to .20    No effect to small effect
.21 to .33     Small to moderate effect
.34 to .50     Moderately strong effect
.51 to .75     Strong effect
.76 or more    Very strong effect

Recall too that when R² = .310, only 31% of the variance is explained; 69% is unexplained. While it is statistically significant, this should be interpreted cautiously.

Type II Error
Prediction questions using survey methods for data collection are notoriously at risk for Type II error because of the consequences of unreliability on measuring the strength and direction of the predictive relationship. In our case, we are relying on participants to be honest and accurate about their sleep and their age! Unreliable measures can obscure the strength of a relationship so that you may erroneously retain the null hypothesis.

Reliability can be enhanced during the design and data collection process.

State your results in APA style and format

In writing this up
for his dissertation, Mr. Shelby would state:

A Simple Regression was used to examine age as a predictor of sleep. Results suggest age is a moderate predictor of sleep, and the model is statistically significant, R² = .31, F(1, 38) = 17.10, p < .001. However, 69% of the variance remains unexplained. The standardized coefficient β = -.56 indicates a moderate negative impact, t = -4.14, p < .001, indicating that as age increases, sleep decreases.

Go to Doc Sharing to download the M7 example database found in the R7031 Example databases student zip file.

Go to Doc Sharing to download the WebEx tutorial links. View the R7031 M7 Simple Regression tutorial.

Multiple Regression

Moving From Simple to Multiple Regression

Recall the example from simple regression:
"A student mentioned she has a hard time staying awake in class because she is having trouble sleeping. Mr. Shelby wonders if this is true for most students. The class calculates the mean hours of weeknight sleep. Then, a woman in her early fifties arrives late, and the instructor asks the class, 'Let's predict how much weeknight sleep she gets.'"

The choices are:

Prediction    Method
Worst         Guess.
Better        Use the mean.
Even Better   Use a variable (like age) to improve upon the mean as a predictor.
The Best      Use ANOTHER variable in addition to age (like coffee consumption) to improve prediction even more.

Multiple regression involves using two or more independent variables to improve prediction, i.e., to explain more variance in the dependent variable.

Using the variance pie diagram, we can see that in the case of one predictor, we can "explain" some part of the variance in the dependent variable, as indicated where the circles overlap (green). But there's a lot left in Y that remains "unexplained" (yellow).

Figure M7.14.

With more than one variable, the amount of unexplained variance is reduced, and our ability to predict becomes more accurate. Figure M7.14 shows that there is more explained variance in Y and much less "unexplained" variance in Y (yellow) when using three predictors.

So the formula for the line to be solved looks like this:

Y = a + b1X1 + b2X2 + ... + bnXn

The "... + bnXn" indicates that you can keep on adding variables.
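Mechanically, each added predictor just contributes one more slope term. Here is a minimal sketch of the general equation; the coefficient values below are placeholders for illustration, not values from the example.

```python
# The multiple regression prediction equation:
#   Y = a + b1*X1 + b2*X2 + ... + bn*Xn
# Coefficients here are hypothetical placeholders.
def predict(intercept: float, slopes: list[float], xs: list[float]) -> float:
    """Return a + the sum of b_i * x_i for any number of predictors."""
    return intercept + sum(b * x for b, x in zip(slopes, xs))

# Two hypothetical predictors (say, age and coffee consumption):
print(predict(9.9, [-0.07, -0.25], [52.0, 3.0]))
```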
In the case of Mr. Shelby, recall: he is interested in the quality of life of working adults in graduate school. He collected data on age, sleep patterns, and caffeine consumption from 40 women who were about to take their comp exams. He knows from reading the published literature that sleep patterns are influenced by age and anxiousness.

So, he could create a predictive question with many predictor variables, and ask two research questions:

1. Can we predict average weeknight sleep from a combination of variables, in this case age and anxiety?
2. Which variables are the "best" predictors?

In education, this kind of question is often used when asking if students' self-perceptions have an impact on performance. So the research question might be: Do perceptions of self-efficacy AND perceptions of belonging predict academic performance?

In organizational leadership, this kind of question is often used when asking what leadership characteristics best predict profitability. So the research question might be: Does tenure of leadership AND communication effectiveness predict profitability?

In counseling, this kind of question is often used when asking what variables are most important in predicting the severity of depression. The research question might be: Does number of previous admissions AND client age predict the severity of the current depressive episode?

Multiple Regression: Methods

Methods for Multiple Regression
Imagine you are making a stew, and you have a pile of vegetables to put in the pot. Some cook quickly, some take a long time to cook. You can use several strategies to add the veggies:

1. Add them all in at once.
2. Add them in separate groups of veggies according to their cooking time.
3. Let an expert cook (other than you) add them in according to their specifications.
4. Some combination of the above.

Adding variables into a multiple regression equation offers the same kinds of choices. Each choice has a different name and a different rationale.

Name of the Technique: Simultaneous (called "Enter" in SPSS)
(1. Add them all in at once)
What it does: Adds all the variables in at once, simultaneously.
Advantages: Use this if you are primarily interested in how much variance is explained and don't care about order or which variables are important.

Name of the Technique: Hierarchical (called "Enter" + "Next" in SPSS)
(2. Add them in separate groups according to their cooking time)
What it does: Adds variables in separate groups you designate.
Advantages: Allows you to examine the impact of groups of variables in terms of how each group increases explained variance. Also good if you have a hypothesis about the temporal sequence of the variables.

Name of the Technique: Stepwise
(3. Let an expert cook [other than you] add them in one at a time)
What it does: The computer determines what variables to add in, OR take out, on the basis of maximizing explained variance.
Advantages: Allows you to see at each "step" how much unique variance each variable contributes. Also good if you are exploring the best combination of predictors and you do not have a specific hypothesis about sequence.

Name of the Technique: Enter, Stepwise, Forward, Backward, Remove
(4. Some combination of the above)
What it does: You alone, or you in combination with the computer, determine the order of entry.
Advantages: If you have a hypothesis you are testing, this gives you total control over the process.
A Word about Unique Variance

A variable is considered a "good" predictor when it explains variance in Y (i.e., there is a lot of overlap in the X1-Y circle) but is independent of (i.e., has minimal overlap with) the other predictors (i.e., the X2 circle).

In the first picture, see that predictors X1 and X2 explain UNIQUE variance in Y and have no SHARED variance.

In the second picture, notice that X1 and X2 overlap with each other as much as with Y. Statistically, it means they are correlated with each other as much as with the Y variable.

Statistically, the addition of variable X2 explains only a very small part of unique variance (the orange bit), while X1 explains much more unique variance (the green). So it is likely X2 will not show up in the final solution. It's like adding red potatoes and yellow potatoes to your stew: you get a little extra color, but no added impact on flavor!

Multiple Regression: Statistics
Statistics for Multiple Regression

We will focus on the stepwise procedure for this example. Since Mr. Shelby doesn't have a hypothesis about which variables should go first, this is an appropriate technique to choose.

The statistics used to interpret multiple regression analysis are much the same as for simple regression; there are just more of them, one for each variable or variable group you add.

SPSS makes the choice of what to include based on the variable that will explain the most UNIQUE variance. Once that variable is "in" the equation, SPSS looks at the remaining variables and goes through the choosing process until either all the variables are in the final model or the ones left out do not account for sufficient unique variance to be included.
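This choosing process can be mimicked with a simple forward-selection loop: at each step, try every remaining predictor and keep the one that raises R² the most. The sketch below is a minimal illustration of that idea, assuming pandas and statsmodels; it omits the entry/removal significance tests SPSS also applies.

```python
# Minimal forward-selection sketch of the stepwise idea: at each step,
# add the remaining predictor that raises R-squared the most, and stop
# when no candidate adds enough unique variance.
import pandas as pd
import statsmodels.api as sm

def forward_select(df: pd.DataFrame, y: str, candidates: list[str],
                   min_gain: float = 0.01) -> list[str]:
    chosen: list[str] = []
    best_r2 = 0.0
    while candidates:
        # Fit one model per remaining candidate and record its R-squared.
        scores = {
            c: sm.OLS(df[y], sm.add_constant(df[chosen + [c]])).fit().rsquared
            for c in candidates
        }
        best = max(scores, key=scores.get)
        if scores[best] - best_r2 < min_gain:   # not enough unique variance
            break
        chosen.append(best)
        candidates.remove(best)
        best_r2 = scores[best]
    return chosen
```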
The statistics include:

1. R² for each time a variable is entered. SPSS uses the term "model" to denote the unique event of a variable being added or taken away.
2. An F statistic to interpret the significance of the model each time a variable is added.
3. A table of the "leftovers": the excluded variables that weren't included, and a statistic that tells us why.

The source table for multiple regression analysis should look familiar. Model 1 corresponds to the first variable that is entered. Model 2 corresponds to the second variable that is entered.

Model          Sum of Squares   df                  Mean Square   F
1 Regression   SSreg            No. of predictors   SSreg / df    MSreg / MSres
  Residual     SSres            N - 2               SSres / df
  Total        SStot            N - 1
2 Regression   SSreg            No. of predictors   SSreg / df    MSreg / MSres
  Residual     SSres            N - 3               SSres / df
  Total        SStot            N - 1
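Plugging the numbers from the earlier simple-regression ANOVA into this template confirms how each cell is computed:

$$MS_{reg} = \frac{SS_{reg}}{df_{reg}} = \frac{19.701}{1} = 19.701,\qquad MS_{res} = \frac{43.793}{38} = 1.152,\qquad F = \frac{19.701}{1.152} \approx 17.10$$

which matches the table's F = 17.095 within rounding.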
Notice that for every variable entered, a model is generated, and each one produces an F-test. That way you can see if significance increases with each variable added.

We also examine the regression coefficients. Since we have more than one variable, the regression coefficients become useful.

1. The unstandardized coefficients (B) are used to create the formula for the line.
2. The standardized coefficients (β) are compared to see the size and direction (positive or negative).
3. Size and direction tell us which of the predictors are most important.

So the results of a stepwise regression analysis can tell us:

1. How much variance is explained (R²) AND how much R² goes up each time we add a predictor.
2. Whether the amount of variance is statistically significant for each model.
3. The importance of each predictor in terms of the size and direction of the standardized regression coefficient.
4. Why some variables were included and others excluded.

Multiple Regression: Assumptions

Multiple regression
analysis builds on the three important assumptions we reviewed in simple regression and adds two more that should be checked before interpreting the results.

1. Both X and Y variables must be close to a normal distribution. You can check this by examining the measures of central tendency, variability, skew, and kurtosis. This is called the normality of the variables assumption.
2. The X and Y variables must form a linear relationship. That is, the scatterplot of X and Y must approximate a straight line, not a curved line. This is called the linearity assumption.
3. The distribution of the residuals must be about the same for all predicted scores. This is called the homoscedasticity assumption. In other words, you cannot have a lot of "error" at the lower ranges of Y and a little at the top ranges. The amount of error from low to high values of Y needs to approximate the normal distribution.
4. The errors of the dependent variable are normally distributed. This is called the normality of the residuals assumption.
5. The independent variables are uncorrelated with each other (there is a minimum of overlap among the predictors' circles). Violation of this assumption is called multicollinearity.
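Outside SPSS, this last assumption is usually screened with the correlation matrix or with variance inflation factors (VIFs). Here is a minimal sketch, assuming the predictors are columns of a pandas DataFrame X:

```python
# Minimal multicollinearity screen, assuming predictor columns in a
# DataFrame X. Pairwise correlations above about .70 (either sign) are
# the rule-of-thumb concern noted later in this module.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def screen_predictors(X: pd.DataFrame) -> None:
    print(X.corr())                          # pairwise Pearson r
    Xc = sm.add_constant(X)
    for i, name in enumerate(Xc.columns):
        if name != "const":
            print(name, variance_inflation_factor(Xc.values, i))
```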
SPSS provides scatterplots and statistics to assess the extent to which assumptions have been violated (or not).

These assumptions are very important, for each time you add in a new variable, you are adding in "error" (unwanted variance). This could increase the risk of Type I error.

Multiple Regression: Testing (1 of 3)

Review of Hypothesis Testing Model

Back to our example. The soon-to-be Dr. Shelby was interested in the variables that could impact the health of working women attending graduate school. He's considering how age and self-assessment of anxiety can impact regular weekday sleeping.

1. State the hypothesis

We have two research questions and two hypotheses.

1. Does the combination of independent variables predict Average Weeknight Sleep?
H0: R² = 0
H1: R² > 0

2. Which independent variables are the best predictors?
H0: B = 0
H1: B ≠ 0

2. State your α level

For this question, alpha is set at .05.

3. Collect the data

Go to Doc Sharing to download the M7 Example Database found in the R7031 Example databases student zip file.

4. Calculate your statistic and p-value

ASSUMPTIONS. First we check these.

1. NORMALITY OF THE VARIABLES.
We examine the descriptive statistics to verify normality of each of our variables. Do these look normal?

                         Age      Self-rating on Anxiety Scale   Average hours of sleep during week
N Valid                  40       40                             40
N Missing                0        0                              0
Mean                     44.80    5.90                           6.862
Median                   42.00    6.00                           6.750
Mode                     41.00    6.00                           6.00
Std. Deviation           10.43    2.098                          1.276
Skewness                 .710     -.019                          .411
Std. Error of Skewness   .374     .374                           .374
Kurtosis                 -.247    -.200                          -.131
Std. Error of Kurtosis   .733     .733                           .733
Minimum                  30.00    1.00                           4.00
Maximum                  69.00    10.00                          10.00
Multiple Regression: Testing (2 of 3)

2. LINEARITY.
SPSS produces a scatterplot for each of the variables used in the final analysis. These are partial correlation plots, meaning each displays the UNIQUE relationship between an independent variable and the dependent variable. You can see that both variables have a linear relationship with the dependent variable; both are negative.

3. HOMOSCEDASTICITY.
Here we examine the residuals against the predicted values by seeing how randomly the dots are distributed. There does not appear to be a clear pattern. This supports the assumption of consistency of error across all values of the independent variables.
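The residual-versus-predicted plot that SPSS draws can be reproduced from any fitted model. A minimal sketch, assuming "model" is a fitted statsmodels OLS result like the one in the earlier simple-regression sketch:

```python
# Residuals vs. predicted values: a random cloud of dots (no funnel or
# curve) supports the homoscedasticity assumption. Assumes "model" is a
# fitted statsmodels OLS result from the earlier sketch.
import matplotlib.pyplot as plt

plt.scatter(model.fittedvalues, model.resid)
plt.axhline(0, linestyle="--")
plt.xlabel("Predicted values")
plt.ylabel("Residuals")
plt.show()
```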
Multiple Regression: Testing (3 of 3)

4. NORMALITY
OF THE RESIDUAL. The straight line of the P-P plot is the benchmark. This is how the data would appear if they were "multivariate normal." Multivariate refers to the fact that we have more than one variable and cannot just look at "the distribution" to see if it is normally distributed. We are looking at the normality of all the predictor variables in combination. So we look at the pattern of dots to see how closely they fit the line. Some of the dots are only slightly off the line at the higher values. Therefore we can say we have met the assumption of normality.

5. MULTICOLLINEARITY.
This refers to the correlation among the IVs, or predictor variables. Examining the correlation matrix, we see that the two independent variables (age and anxiety) have a weak positive correlation, r = .258, p = .054. Concerns for multicollinearity should arise only when correlations are > .70 (+/-). So, we have met this assumption as well.

Correlations (N = 40 for all variables)

Pearson Correlation                  Average hours of sleep   Age      Anxiety
Average hours of sleep during week   1.000                    -.557    -.551
Age                                  -.557                    1.000    .258
Self-rating on Anxiety Scale         -.551                    .258     1.000

Sig. (1-tailed)
Average hours of sleep during week                            .000     .000
Age                                  .000                              .054
Self-rating on Anxiety Scale         .000                     .054

Multiple Regression: Effects

Effects of
Variables and Significance

R² is the statistic that is used to answer the first question: Do the independent variables predict the dependent variable? We look at this table:

Model Summary(c)

Model   R         R Square   Adjusted R Square   Std. Error of   R Square   F Change   df1   df2   Sig. F
                                                 the Estimate    Change                            Change
1       .557(a)   .310       .292                1.07352         .310       17.095     1     38    .000
2       .699(b)   .488       .461                .93711          .178       12.868     1     37    .001

a Predictors: (Constant), Age
b Predictors: (Constant), Age, Self-rating on Anxiety Scale
c Dependent Variable: Average number of hours of sleep during week

There is R, which is
the measure of association; in this case, the same as the Pearson r = -.557. While the Pearson's r coefficient indicates direction (positive or negative), the same statistic in regression is only looking at strength and not direction. That explains why the R statistic in the Model Summary does not have a negative sign like the Pearson's r.

From the Model Summary we can see that there are two Models, meaning two variables have been entered. The first four columns (in pink) describe how much variance each variable is accounting for. The "Change Statistics" (in blue) indicate if there is a change in impact from one to two variables.

Reading across the Model 1 row, the first variable, age, accounts for 31% of the variance (R² = .310). This is statistically significant, F(1, 38) = 17.095, p < .001.

Reading across the Model 2 row, the addition of the second variable, Self-rating on Anxiety Scale, adds just a little more unique variance (R Square Change = .178), so that the total amount of variance explained (R²) using two variables is .488. The amount of variance accounted for is almost 49%. This is also statistically significant, F(1, 37) = 12.868, p = .001.
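The F Change entry can be verified by hand from the two R² values, using the standard formula for adding one predictor (N = 40 cases, k = 2 predictors in the larger model):

$$F_{change} = \frac{(R^2_2 - R^2_1)/1}{(1 - R^2_2)/(N - k - 1)} = \frac{.488 - .310}{.512/37} \approx 12.87$$

which matches the table's 12.868 within rounding.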
The ANOVA source table tells if the Models are significant.

Model          Sum of Squares   df   Mean Square   F        Sig.
1 Regression   19.701           1    19.701        17.095   .000(a)
  Residual     43.793           38   1.152
  Total        63.494           39
2 Regression   31.001           2    15.501        17.651   .000(b)
  Residual     32.492           37   .878
  Total        63.494           39

a Predictors: (Constant), Age
b Predictors: (Constant), Age, Self-rating on Anxiety Scale
c Dependent Variable: Average number of hours of sleep during week

The F-test for the first Model is F(1, 38) = 17.095, p < .001. The model is statistically significant.

The F-test for the second Model is F(2, 37) = 17.651, p < .001. The model is statistically significant.

Regression Coefficients

And last, we examine the regression coefficients in the final model. There are two variables, so we can look at β (Beta) and determine that both predictor variables are significant.

Model 1 (in pink): For Age, Beta = -.557, t = -4.135, p < .001.
Model 2 (in blue): When the two predictors are added, the Beta of Age changes to -.444, t = -3.650, p = .001. For Anxiety, Beta = -.437, t = -3.587, p = .001.

When both predictors are included, both age and anxiousness significantly predict average hours of sleep. Since the value of Beta for Age is slightly higher (we ignore the minus sign), this indicates that Age is slightly more important.

Coefficients(a)

Model                            B        Std. Error   Beta    t        Sig.
1 (Constant)                     9.915    .758                 13.087   .000
  Age                            -.068    .016         -.557   -4.135   .000
2 (Constant)                     10.865   .712                 15.252
  Age                            -.054    .015         -.444   -3.650   .001
  Self-rating on Anxiety Scale   -.266    .074         -.437   -3.587   .001

a Dependent Variable: Average number of hours of sleep during week

Our formula for the best-fitting line is (B values highlighted in yellow):

Y = B1X1 + B2X2 + a
Y = -.054X1 - .266X2 + 10.865

Notice that for the formula we use the Unstandardized Coefficients (B).
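With the unstandardized B values in hand, prediction is plain arithmetic. A minimal sketch using the coefficients from the table above:

```python
# Predicted weeknight sleep from the final model:
#   sleep = 10.865 - .054*age - .266*anxiety
def predicted_sleep(age: float, anxiety: float) -> float:
    return 10.865 - 0.054 * age - 0.266 * anxiety

# For example, a 50-year-old with an anxiety self-rating of 6:
print(round(predicted_sleep(50, 6), 2))      # about 6.57 hours
```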
Multiple Regression: Null & Risk Factors

5. Retain/reject the null hypothesis

We have two research
questions and two hypotheses.

1. Does the combination of independent variables predict Average Weeknight Sleep?
H0: R² = 0
H1: R² > 0

The first variable, Age, accounts for 31% of the variance. The addition of the second variable, Self-rating on Anxiety Scale, adds more unique variance (.178), so that the total amount of variance explained (R²) using two variables is .488, or almost 49% of the variance explained. The model is statistically significant. We reject the null hypothesis.

2. Which independent variables are the best predictors?
H0: β = 0
H1: β ≠ 0

For Model 1 with Age, Beta = -.557, t = -4.135, p < .001. We reject the null hypothesis. For Model 2 with Age and Self-Rating of Anxiety: when the two predictors are added, the Beta of Age changes to -.444, t = -3.650, p = .001. For Anxiety, Beta = -.437, t = -3.587, p = .001. Again, we reject the null hypothesis.

When both predictors are included, both age and anxiousness significantly predict average hours of sleep. Since the value of Beta for Age is slightly higher (we ignore the minus sign), this indicates that Age is slightly more important than anxiety in predicting sleep. Because the Betas are negative, we know that each has a negative impact on sleep. That is, as age increases, sleep decreases. As anxiety increases, sleep decreases.

6. Risk of Type I and Type II error

Type I Error
R² is a measure of effect size, and the interpretation using the effect size chart from previous lessons can be applied here. Notice that it is similar to the discussion of size and strength.

Value of d     How strong the effect
0.00 to .20    No effect to small effect
.21 to .33     Small to moderate effect
.34 to .50     Moderately strong effect
.51 to .75     Strong effect
.76 or more    Very strong effect
Recall that R² = .488, or almost 49% of the variance is explained; the remaining 51% is unexplained. R² is statistically significant and has a moderately strong effect size.

Also, all of the assumptions have been met, suggesting that the risk of Type I error has been minimized.

Type II Error
There are two big risks in multiple regression. One is the risk of unreliable measures, which we've mentioned before. The other is sample size. In order to have sufficient power, it is recommended that the researcher have 12 to 15 cases per variable in stepwise regression. We have 40 participants, or 20 cases per variable, which is acceptable. So the risk for Type II error is low.

Multiple Regression: Written Conclusions

7. State your results in APA style and format

In writing this up
for his dissertation, Mr. Shelby would state:

A Multiple Regression analysis was run examining Age and Self-rating on Anxiety Scale as predictors of sleep. Results indicate a statistically significant model, F(2, 37) = 17.651, p < .001. The first variable, Age, accounts for 31% of the variance. The addition of the second variable, Self-rating on Anxiety Scale, contributes 17.8% unique variance; R² = .488, or almost 49%. Regarding the predictive value of the independent variables, for Model 1 with Age, Beta = -.557, t = -4.135, p < .001. We reject the null hypothesis. Age is a significant predictor of sleep. When Age and Self-Rating of Anxiety were added, the Beta of Age changes to -.444, t = -3.650, p = .001. For Anxiety, Beta = -.437, t = -3.587, p = .001. Again, we reject the null hypothesis. Thus, amount of sleep is influenced by age and anxiety: the older and more anxious participants are, the less sleep they experience.

When both predictors are included, both age and anxiousness significantly predict average hours of sleep. Since the value of Beta for Age is slightly higher (we ignore the minus sign), this indicates that Age is slightly more important.

Go to Doc
Sharing to download the output file of the M6 example database found
in the R7031 Example databases student zip file.

Go to Doc
Sharing to download the WebEx tutorial links. View the R7031 M6 Multiple
Regression tutorial.